LATEX-L Archives

Mailing list for the LaTeX3 project

LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

Reply-To: Mailing list for the LaTeX3 project <[log in to unmask]>
Date:     Thu, 10 May 2001 19:00:26 +0200
Quick answers to a couple of points. Lars says:

>The comparison in Section 3.2.1 of how characters are processed in TeX and
>Omega respectively also seems strange. In Omega case (b), column C, we see
>that the LICR character \'e is converted to an 8-bit character "82 before
>some OTP converts it to the Unicode character "00E9 in column D. Surely
>this can't be right---whenever LICR is converted to anything it should be
>to full Unicode, since we will otherwise end up in an encoding morass much
>worse than that in current LaTeX.

Surely it's right :-). Remember that é is not an active character in
lambda and that OCPs are applied after expansion. Let's consider the
input é\'eé. It expands to the character sequence "82 "82 "82, which
is fine. If instead we define \'e as "00E9, the expansion is
"82 "00 "E9 "82, which is definitely wrong.

Further, converting the input to Unicode at the LICR level means that
the auxiliary files use the Unicode encoding; if the editor is not a
Unicode one, these files become unmanageable and messy. LICR should,
IMO, preserve the current LaTeX conventions, and é\'eé should be
written to these files exactly that way. In other words, any file to
be read by LaTeX should follow the "external" LaTeX conventions and
be transcoded only in the mouth.
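
To make the alternatives concrete, here is a minimal sketch; the
macro name \LICReacute is made up and this is not actual lambda code.
It assumes, as in the example above, that the input encoding delivers
a literal é as the 8-bit character "82, and it uses Omega's ^^^^
notation for a 16-bit character:

\def\LICReacute{^^82}       % \'e yields the same 8-bit character as
                            % a literal é, so é\'eé expands to the
                            % uniform stream "82 "82 "82 and a single
                            % OCP can transcode it all afterwards
%\def\LICReacute{^^^^00e9}  % \'e yields the Unicode character
                            % directly, so é\'eé expands to a stream
                            % that mixes the document encoding with
                            % Unicode (the case criticized above)

Either definition is easy to write; the point is that only the first
keeps the post-expansion stream in a single encoding that one OCP can
handle.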

>As I understand the Omega draft documentation, there can be no more than
>one OTP (the \InputTranslation) acting on the input of LaTeX at any time
>and that OTP is only meant to handle the basic conversion from the external
>encoding (ASCII, latin-1, UTF-8, or whatever) to the internal 32-bit
>Unicode. All this happens way before the input gets tokenized, so there is

In fact, \InputEncoding was not intended for that, but only for
"technical" translations which apply to the whole document, such as
one byte -> two bytes or little endian -> big endian. Its main
problem is that it doesn't translate the contents of macros:
\def\myE{É}
\InputEncoding <an encoding>
É\myE

only the explicit É is transcoded. That can even be desirable in some
circumstances, provided you know in advance which encodings will be
used. More dangerous is the following:

\comenzar{enumeración} % Spanish interface with, say, MacRoman
                       % \comenzar means \begin
\InputEncoding <iso hebrew>

\terminar{enumeración} % <- that's transcoded using iso hebrew!
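
A sketch of the obvious defence, assuming that \InputEncoding simply
takes effect from that point in the input onwards and can be issued
again (the <...> arguments are placeholders, as above): switch back
to the document encoding before any further interface commands
appear.

\comenzar{enumeración}      % read with the Spanish/MacRoman setup
\InputEncoding <iso hebrew>
...                         % the Hebrew-encoded material goes here
\InputEncoding <macroman>   % switch back first, so that the ó in
\terminar{enumeración}      % enumeración is read correctly again

Of course this relies on the author remembering to switch back, which
is exactly the kind of fragility the example illustrates.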

Regards
Javier


