LATEX-L Archives

Mailing list for the LaTeX3 project

LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

From: Bruno Le Floch <[log in to unmask]>
Date: Wed, 12 Oct 2011 13:35:20 -0400
Reply-To: Mailing list for the LaTeX3 project <[log in to unmask]>

On 10/12/11, Bruno Le Floch <[log in to unmask]> wrote:
> On 10/12/11, Joseph Wright <[log in to unmask]> wrote:
>> On 12/10/2011 15:52, Lars Hellström wrote:
>>> Last week, I was reading up a bit on lambda calculus (a subject with
>>> which I suspect there are list members far more familiar than I am). One
>>> thing that caught my eye was the definitions of True and False, using
>>> the so-called Church booleans
>>>
>>>   True = \lambda xy . x
>>>   False = \lambda xy . y
>>>
>>> These may look strange, but turn out to be two objects that are very
>>> familiar to us: \use_i:nn and \use_ii:nn respectively (or \@firstoftwo
>>> and \@secondoftwo, for those who still think in 2e terms (like me)).
>>> Indeed, putting my mind in LaTeX mode made the lambda calculus concepts
>>> much easier to digest.
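>>>
>>> For concreteness, a minimal illustration (the name \my_test_bool and
>>> the "yes"/"no" text are just placeholders): "applying" a Church
>>> boolean to its two branches is plain juxtaposition,
>>>
>>>   \cs_set_eq:NN \my_test_bool \use_i:nn  % the Church "True"
>>>   \my_test_bool { yes } { no }           % leaves "yes" in the input
>>>   \use_ii:nn { yes } { no }              % leaves "no"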
>>>
>>> But with such thinking comes also the converse association: might LaTeX
>>> programming benefit from borrowing concepts from lambda calculus? In
>>> particular: might the Church booleans be useful as general-purpose
>>> booleans in LaTeX3?!?
>>>
>>> (A rather early thought then was that Frank's generation probably knows
>>> all about lambda calculus and therefore must have thought about this
>>> already. But I don't recall having seen it discussed anywhere.)
>>>
>>> One thing that is really nice about the Church booleans is how they
>>> support the logical operations. One can define
>>>
>>>   \cs_new:Npn \and:nnTF #1 #2 { #1 {#2} {\use_ii:nn} }
>>>   \cs_new:Npn  \or:nnTF #1 #2 { #1 {\use_i:nn} {#2} }
>>>   \cs_new:Npn \not:nTF  #1 { #1 {\use_ii:nn} {\use_i:nn} }
>>>
>>> and go
>>>
>>>   \cs_set_eq:NN \foo_a_bool \use_i:nn
>>>   \cs_set_eq:NN \foo_b_bool \use_ii:nn
>>>   \or:nnTF {
>>>      \and:nnTF { \foo_a_bool } { \foo_b_bool }
>>>   } {
>>>      \not:nTF { \foo_b_bool }
>>>   } {True} {False}
>>>
>>> to get the correct result True. These \and:, \or:, and \not: can even
>>> operate on general \...:TF constructions, and they work correctly
>>> regardless of whether the \...:TF is expandable or needs to be executed!
>>> (Like e.g. \regex_match:nnTF.) This is already much nicer than what one
>>> can easily get from the primitive \if... \fi conditionals. Using this
>>> for LaTeX3 booleans would also have the interesting property that
>>> "predicates" and TF forms become the exact same thing!
>>>
>>> I can imagine that a downside of using this would be that it suggests a
>>> coding style that could be considered "too exotic". It sometimes seems
>>> that l3 tries to make things look very much like they would in a
>>> traditional imperative programming language, even though what goes on
>>> under the hood can be quite different. A concept like Church booleans
>>> rather furthers functional idioms of programming, where a lot of the
>>> "data" you pass around is also "functions" (or partially applied
>>> functions) that can be applied with little or no syntactic framing in
>>> the source code.
>>>
>>> Now I wonder: Would it be useful? (I think it would.) Would there still
>>> be time to change? Does it seem worth it?
>>
>> One obvious question is how easy it would be to support predicate
>> decision-making in such a scheme:
>>
>>   \bool_if:nTF
>>     { \l_my_first_bool && ( \l_my_second_bool && !\l_my_third_bool ) }
>>
>> which is one of the reasons for the approach currently taken.
>
> I had the same thought as Joseph. As far as I can tell, the infix
> notation requires the predicates/booleans to be unexpandable (more
> precisely, to stop f-expansion), because they can come in all sorts
> of forms, not necessarily as single tokens. One option would be
>
> \cs_set:Npn \c_true_bool { \exp_stop_f: \use_i:nn }
> \cs_set:Npn \c_false_bool { \exp_stop_f: \use_ii:nn }
> \cs_set:Npn \bool_if:NTF #1 { \tex_romannumeral:D -`q #1 }
>
> That can probably be made to work within boolean expressions, with
> roughly the same efficiency as the current approach. However, it still
> doesn't work for non-expandable tests.
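>
> For instance, with those definitions (just a sanity check of the
> mechanism, with "yes"/"no" as placeholder branches):
>
>   \bool_if:NTF \c_true_bool { yes } { no }
>   % = \tex_romannumeral:D -`q \c_true_bool { yes } { no }
>   % the romannumeral scan expands \c_true_bool, stops at the
>   % \exp_stop_f: and produces nothing, leaving
>   %   \use_i:nn { yes } { no }   i.e.  yes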
>
> [1] http://article.gmane.org/gmane.comp.tex.latex.latex3/2187
>
> I think that Lars has in the past [1] advocated against the infix
> syntax as an attempt to impose on TeX a foreign and not very
> appropriate syntax (please correct me if I misunderstood you). From
> this perspective, providing \and:nn etc. as Lars proposes would be
> much faster than the current approach, and would trivially
> accommodate non-expandable conditionals.
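>
> For instance (just an illustrative combination of Lars's \and:nnTF
> with the \regex_match:nnTF he mentions), the prefix form happily
> takes a test that must be executed rather than expanded:
>
>   \and:nnTF
>     { \regex_match:nnTF { ^ab } { abc } }  % not expandable
>     { \tl_if_empty:nTF { } }               % expandable, true
>     { yes } { no }
>   % -> \regex_match:nnTF { ^ab } { abc }
>   %      { \tl_if_empty:nTF { } } { \use_ii:nn } { yes } { no }
>   % nothing tries to expand the regex test: it simply runs, selects
>   % its true branch, and the whole thing ends up as "yes".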
>
> One possibility, though, which may work (I haven't thought about it
> for long enough) is to require each "operand" of an infix boolean
> expression to be set in braces (if it is more than one token). Then
> the expression parser can read the operand and the operator, and
> leave the operand followed by the two appropriate \bool_ functions to
> continue parsing. There are some issues with non-expandability and
> \group_align_safe_begin:, but those might be solvable.
>
> \alt_bool_if:nTF
>   {
>      { \tl_if_in:nnTF { #1 } { a } }
>      || {
>           { \str_if_eq:xxTF { #1 } { bcd } }
>           && \l_my_bool
>         }
>      || \l_my_other_bool
>   }
>   { ... } { ... }

On this last point again, we actually don't need to require the _user_
to write that. Simply define the predicates to enclose themselves in
braces after grabbing their arguments. For instance,

\cs_set:Npn \tl_if_in_p:nn #1#2 { { \tl_if_in:nnTF {#1}{#2} } }
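
A worked view of what that redefinition leaves behind (the "abc"/"b"
arguments here are arbitrary):

  \tl_if_in_p:nn { abc } { b }
  % one expansion step later this is
  %   { \tl_if_in:nnTF { abc } { b } }
  % a single braced group, which an f-expansion cannot enter.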

Then f-expanding safely stops at the left brace, and the \bool
expression parser can grab two arguments and inspect #2 to know which
operation comes next. The details are tricky, but I believe that would
work, and would allow non-expandable material in boolean expressions
with roughly the same efficiency as currently. The only drawback I see
so far (there are probably other hidden ones), besides the weekend and
a half it will take to update the code, is that \bool_if:nTF would then
be expandable or not depending on the conditionals appearing in its
first argument. From the user's point of view that is probably expected
and easy to explain. The issue is that \bool_if:nTF starts and ends
with \group_align_safe_begin/end:. If

- \bool_if:nTF appears at the start of a cell, when TeX is still
looking ahead for \omit
- and the first argument contains non-expandable predicates

then \group_align_safe_begin: is seen before the u-part of the
preamble, hence has no effect on the master counter, and
\group_align_safe_end: is seen after the u-part of the preamble, hence
causes an "extra brace" error. I believe that this is rare enough that
documenting it and saying "use \scan_align_safe_stop: \bool_if:nTF" is
reasonable.

On those thoughts, let me go back to my day job.
--
Bruno
