At 00:10 +0200 2001/06/11, Lars Hellström wrote:
>In this case, I suspect
>the labels should be thought of as being nestable with separate markers for
>beginning and end, so that each token list that is formed gets delimited by
>matching begin and end labels that record the current context of the token
>list they were extracted from.
> then it doesn't matter if it is inserted into a French context table of
>contents. Upon being written to an external file, the labels should be
>converted to suitable markup.
This is also what I am saying: if one makes sure to nest those localization
contexts consistently, the logical part of it should be fairly straightforward.
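For illustration, the kind of delimited entry that might end up in a .toc
file could look like the following (the \contextbegin/\contextend command
names are hypothetical, invented here just to make the idea concrete):

```latex
% A Swedish chapter heading written to the .toc file, wrapped in matching
% begin/end labels that record the context it was extracted from:
\contextbegin{swedish}
\contentsline{chapter}{Inledning}{1}
\contextend{swedish}
```

Reading this back inside a French table of contents would then restore the
Swedish context for just that one entry, regardless of the surrounding context.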
>An interesting question is whether these labels should be explicit tokens
>or be hidden from the user (i.e., argument grabbing and things like
>\futurelet look past them). Making them explicit tokens would probably
>break tons of code.
This is the hard part, how the localization contexts should be defined in
the input, so as to be convenient for the user.
>As for what the labels should be to the user, I think a scheme of making
>them integers is pretty useless (how they are implemented is of course
>another matter).
Localization numbers would be as accessible to the user as input encoding
numbers, that is, normally the user would not use them at all. Standardized
localization numbers also require that the localization specs can be made
so specific that one normally would not bother to override (parts of) them.
> A better idea would be to make them some kind of property
>lists, i.e., containers for diverse forms of information that are indexed
>by some kind of names.
The normal way is to make a lookup table, that is, a type of environment.
One then looks up a name recursively, up through the tables towards the top,
for the first occurrence, just as in any computer language with nested scopes.
However, as a matter of implementation, one would not want to carry that
much information around. Further, nothing really says that different
localizations will have the same set of variables. So there must be a
way to first identify which localization to use, and from that proceed to
look up the variables in it.
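The recursive lookup through nested tables can be sketched as follows (a
minimal Python sketch; the class and variable names are invented for the
example and not part of any existing implementation):

```python
class LocalizationTable:
    """A lookup table chained to a parent table; names are searched
    recursively up through the enclosing tables towards the top."""

    def __init__(self, parent=None, **entries):
        self.parent = parent        # enclosing localization context, or None at the top
        self.entries = dict(entries)

    def lookup(self, name):
        # Search this table first, then each enclosing table in turn,
        # returning the first occurrence found.
        table = self
        while table is not None:
            if name in table.entries:
                return table.entries[name]
            table = table.parent
        raise KeyError(name)

# A French context nested inside a default (top-level) context:
default = LocalizationTable(chaptername="Chapter", decimal_sep=".")
french = LocalizationTable(parent=default, chaptername="Chapitre")

french.lookup("chaptername")   # "Chapitre" -- found in the French table
french.lookup("decimal_sep")   # "."        -- falls back to the enclosing table
```

Note that the chain only identifies *which* table a name comes from; a table
need not define the same set of names as its parent.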
> Creating new label values from old by copying the
>values and then changing some would be useful when defining dialects.
The picture that I have in my mind is that all (standard) localization
specs should be loaded (at need) in parallel; it is then easy to define a
customized localization spec by picking variables from already defined
ones. Alternatively, one defines entirely new localized variables.
Note that there is a difference in behavior: If I define my own localized
dictionary, it will not change when any other localization dictionary is
updated. But if I define my localized dictionary on top of an already
defined dictionary, then the dictionary I use will be updated when the
already defined dictionary is updated.
These are different types of behavior, and I think one must accommodate both.
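The two behaviors can be illustrated with dictionaries (a Python sketch;
the dictionary names and entries are made up for the example):

```python
import collections

# An already defined localization dictionary.
swedish = {"chaptername": "Kapitel", "figurename": "Figur"}

# Behavior 1: a copy -- later updates to `swedish` do not propagate.
my_copy = dict(swedish)

# Behavior 2: a layer on top of `swedish` -- any name not overridden in the
# top layer is looked up live in the underlying dictionary.
my_layer = collections.ChainMap({}, swedish)

swedish["figurename"] = "Fig."   # the base dictionary is updated

my_copy["figurename"]    # "Figur" -- frozen at the time of copying
my_layer["figurename"]   # "Fig."  -- follows the update to the base
```

The copy is insulated from later changes, while the layered dictionary
tracks them; both are cheap to set up, so supporting both seems feasible.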
>The main problem I see with context labels is that of when they should be
>inserted.
This is the difficult one.
> I can think of at least three different models:
>1. Labels must be present in the input (e.g. encoded using control
>sequences).
>2. Do as today, i.e., context switches are initiated when commands are
>executed.
Perhaps a hybrid: localization labels are not part of Unicode, but it may be
possible to define such formats using a suitable extension of the Omega
translator, and LaTeX may decide to use such formats for writing and
reading .aux files and the like.
On the other hand, one wants to have convenient context switches within
the LaTeX language itself.
Whether one uses long formats such as <begin french> ... <end french> or
shorter, character-based formats, is probably just a question of taste.