## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE


Bruno Le Floch wrote 2011-10-13 20.23:
> The main problem is that we are not only manipulating booleans, but
> _predicates_, which take arguments. Thus my approach of making
> predicates expand to {<code which has grabbed its arguments>  }, with
> the extra braces. Then everything else is a matter of getting the
> logic right.
>
>>     \bool_if:nTF { \bool_pred:n{\l_my_first_bool} &&
>>      ( \bool_pred:n{\l_my_second_bool} && !\bool_pred:n{\l_my_third_bool} ) }
>

A better formatting would be

\bool_if:nTF {
  \bool_pred:n{\l_my_first_bool} && (
    \bool_pred:n{\l_my_second_bool} &&
    ! \bool_pred:n{\l_my_third_bool}
  )
}

:) But maybe the problem is that &&, (, ), !, and || do not really stand out
against a background of braces and backslashes (far less so than against the
background of alphanumeric identifiers you'd encounter in C)? Fortranish
.AND., .OR., and .NOT. would be more eye-catching, but the parentheses are
harder to do anything about.

>> Whether whatever scheme you're currently using to parse these expressions
>> could be taught to insert the \bool_pred:n (or just \c_true_bool
>> \c_false_bool) automatically is of course another matter.
>
> That's not possible with e.g., \str_if_eq_p:xx {a} {a} if it is
> exactly the same as \str_if_eq:xxTF{a}{a}, because we never know how
> many arguments the predicate takes.

Well, it depends on the parsing scheme one uses. If one instead goes for
delimited-argument-style subformula grabbing, then it wouldn't matter how
many tokens an expression terminal (predicate) consists of, and as a
side effect operator priority becomes straightforward (the operator you
split at first gets the lowest priority). What would be tricky for this
approach is the handling of parentheses, since in that case there are two
distinct possibilities for "the next token of interest here", but I think it
is doable (first split on one, then try to split on the second, and treat
the combination of the outcomes as a case).

I believe the whole thing would best be structured as an expand-only
rewrite of infix boolean expressions into Church-style compositions of
booleans. It probably wouldn't be good at catching certain syntax errors,
though.

> With the extra pair of braces as
> proposed in my previous mail, things can be made to work (and fast).

That's good too, then.
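To spell out the idea being referred to (a hedged sketch of my own; the
actual l3 code may well differ, and \my_pred:n is not a real function):
the predicate grabs its own argument(s) and expands to a single brace
group, so the expression scanner can pick up each operand as one
undelimited argument regardless of the predicate's arity.

% Hypothetical sketch of the "extra braces" scheme:
\cs_new:Npn \my_pred:n #1 { { \bool_if_p:N #1 } }
% so that  \my_pred:n {\l_my_first_bool}  x-expands to a single
% group { <truth value> }, and a two-argument predicate would
% likewise leave just one group behind for the parser.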

>> Another thing about the Church booleans is that they do not require a
>> framework to be useable; they can be used directly also in code written with
>> \ExplSyntaxOff.
>
> I don't see how you could use \l_my_church_bool (or any other
> predicate) without \ExplSyntaxOn.

Consider a command whose role is similar to that of the 2e \makelabel: A
user-definable command which gets its argument(s) from more basic levels of
LaTeX, and is supposed to do something in a configurable way. ((In the
future all such commands should be template keys, you say? Why, yes, it may
well be. Template instances will often be defined with \ExplSyntaxOff, I
think.)) Suppose further that one of these arguments is a boolean. ((Poor
design? Well, such things will happen; it's an open system.)) Now, if said
boolean is a Church boolean (effectively \use_i:nn or \use_ii:nn), then it
can be used directly to select between two pieces of code, e.g. like

\def\makesomething#1#2{% Assuming \ExplSyntaxOff
% #1 is some text
% #2 is the boolean
#1%
#2{ \thetheorem}{}%
.\ %
}

Here, \makesomething{Theorem}{\use_i:nn} will produce e.g. "Theorem 3.4. ",
whereas \makesomething{Theorem}{\use_ii:nn} produces just "Theorem. ".

Yes, templates tend to handle these situations by having two separate keys
for the two cases. No, I don't think that package authors will therefore
never decide to pass booleans to these kinds of commands.

>> I can see that becoming convenient every now and then, for
>> document preamble command definitions (even though there arguably has to be
>> a leaky abstraction somewhere for them to even become exposed, such things
>> will happen).
>
> Just like document preambles often need \makeat(letter|other), they
> could use \ExplSyntax(On|Off) if needed.

Yeah, but it's a little like admitting a small defeat when one *has* to do
that (or venture out to strange TeX primitives). For one thing, it
significantly complicates managing one's preamble: Are these definitions
with @ letter or @ other? Why didn't that definition work when I copied it
from my previous paper to this new one? Oh, I missed that \makeatletter on
the previous screen page!##@!*!

> If the issue is with
> preserving spaces, then \ExplNames(On|Off) is also provided,

No, and using that adds even further complications.

> and (infix) boolean expressions work there as well.

Within \ExplNamesOn, you mean? Proving that *all* stray spaces are
*always* gobbled could be difficult, but I can see that there is at
least a fair chance of it (thanks to undelimited macro arguments
skipping spaces).

> We could provide \bool_and:nn etc. (or some variation thereof). One
> problem is where to stop: here you've used the three argument variant
> \add:nnn, but we should then provide a four argument variant, etc.

Even if you include everything up to the nine argument forms, you'll end up
with fewer macros in total than for the alternative. ;-)

> Hence the input should be some kind of list, which ( a && b && c &&
> ... && d ) is.

{a}{b}{c}{d} is even more a list, even if of indeterminate length. :-)
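For instance (a sketch with hypothetical \my_... names, building only on
documented l3quark functions): a variadic "and" over such a {a}{b}{c}...
list of Church booleans (\use_i:nn = true, \use_ii:nn = false) can recurse
one group at a time and short-circuit on the first false, so no fixed-arity
\bool_and:nn / \bool_and:nnn / ... family is needed.

\cs_new:Npn \my_all:n #1
  { \__my_all:N #1 \q_recursion_tail \q_recursion_stop }
\cs_new:Npn \__my_all:N #1
  {
    \quark_if_recursion_tail_stop_do:Nn #1 { \use_i:nn } % list ended: true
    #1 % a Church boolean: selects one of the next two groups
      { \__my_all:N }                                      % true: go on
      { \use_i_delimit_by_q_recursion_stop:nw \use_ii:nn } % false: stop
  }

% \my_all:n { {\use_i:nn} {\use_i:nn} }  expands to  \use_i:nn
% \my_all:n { {\use_i:nn} {\use_ii:nn} } expands to  \use_ii:nn

Note that this is fully expandable, and the result is again a Church
boolean that can be applied directly to two branches.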

> It is rather unfortunate indeed that & has to be used
> there: its catcode leads to trouble. I don't see what other character
> could reasonably be used, unless we go for "and" and "or" (that would
> be just as fast to parse).
>
> On a separate point: we haven't provided things like \int_add:nn or
> \int_mul:nn as a prefix replacement for + or * in \int_eval:n (eTeX's
> \numexpr), because that doesn't improve legibility. I find that the
> current \bool_if:nTF with infix operators (except \xor) fits nicely
> with the syntax of \int_eval:n, \dim_eval:n, and a future \fp_eval:n.

A couple of years ago, the Tcl community faced a similar situation:
Arithmetic had always been expressed in an infix fashion (using the [expr]
command, which is very similar semantically to these \XX_eval:n), even
though the language as such is strongly prefix in nature. Proposals were
made to add prefix arithmetic commands as a supplement; some argued
against, but in the end the prefix forms were added. In the years since, they
have proven to be quite useful at times, even though hardly anyone uses them
exclusively.

>> Along that line of thought, I've also toyed with the idea of having an xparse
>> o argument return either
>>
>>       \NoValue       or
>>       \YesValue{<argument>}
>>
>> where
>>
>> \cs_new:Npn \NoValue {-NoValue-}
>> \cs_new_eq:NN \YesValue \use_i:n
>> \cs_new:Npn \IfNoValueTF #1 { \use_iii:nnn #1 \use_iii:nnn \use_i:nn }
>>
>> It's not quite as elegant as the Church booleans, but strikingly simple.
>
> And much faster, indeed, than \pdfstrcmp. This would usually get my
> vote. However, xparse is at the user level, so a few micro-seconds
> gained here and there (that's what \benchmark:n is giving me) are not
> going to make any sizeable difference. Also, I find giving the
> arguments as "\YesValue{<argument>}" to the user quite awkward.

Well, count this as brainstorming. It occurred to me that in the vast
majority of cases, it wouldn't matter to me whether I got
\YesValue{<argument>} or simply <argument>.

Joseph's remark that:
> You then need to remember that part of
> the defined semantics of \NoValue is that it's protected, so it's not
> clear what should happen about protection for \YesValue.

is what kills this idea; I hadn't considered that aspect of \NoValue.
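For the record, here is why the quoted sketch works, traced step by step
(my own reconstruction, not tested code):

% Case 1: the argument is \NoValue (one token).
%   \IfNoValueTF {\NoValue} {T} {F}
%   -> \use_iii:nnn \NoValue \use_iii:nnn \use_i:nn {T} {F}
%   -> \use_i:nn {T} {F}      % \use_iii:nnn ate \NoValue and itself
%   -> T
%
% Case 2: the argument is \YesValue{<arg>} (two tokens).
%   \IfNoValueTF {\YesValue{<arg>}} {T} {F}
%   -> \use_iii:nnn \YesValue {<arg>} \use_iii:nnn \use_i:nn {T} {F}
%   -> \use_iii:nnn \use_i:nn {T} {F}   % \YesValue and <arg> gone
%   -> F

The trick is that the one-token and two-token cases shift which of the
trailing selectors ends up as the third argument of \use_iii:nnn.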

Lars Hellström