## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE


Subject: \InputEncoding [Was: Multilingual Encodings Summary 2.2]
From: Marcel Oliver <[log in to unmask]>
Reply To: Mailing list for the LaTeX3 project <[log in to unmask]>
Date: Sat, 19 May 2001 19:20:01 +0200
Frank Mittelbach writes:
>  > In fact, \InputEncoding was not intended for that, but only for
>  > "technical" translations that apply to the whole document, such
>  > as one byte -> two byte or little endian -> big endian. Its main
>  > problem is that it doesn't translate macros:
>  > \def\myE{É}
>  > \InputEncoding <an encoding>
>  > É\myE
>
> \InputEncoding is the point where one needs to go from the external
> source encoding to OICR; that is precisely the wound: the current
> \InputEncoding isn't doing this job fully (and, to be fair, it is
> not clear how to do it properly)
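For comparison, the LaTeX2e analogue of this external-to-internal step is inputenc's \DeclareInputText, which maps an input byte to the encoding-independent internal form (the LICR). A minimal sketch, assuming the standard latin1 setup (the actual latin1.def uses a slightly more defensive internal form):

```latex
% Sketch: declare that byte 0xE9 (233) in the source means \'e.
% After this, the byte tokenizes to an active character whose
% expansion is the accent command, so downstream macros see the
% encoding-independent form \'e rather than a raw byte.
\DeclareInputText{233}{\'e}
```

This is exactly the step that, per Frank's remark, the Omega-style \InputEncoding does not yet perform fully: it re-encodes bytes but does not carry macro bodies across encodings.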

- There is one default \InputEncoding (which may need to be specified
at the time of format creation).  This encoding is the one that all
macro names need to be in, as well as the encoding initially
selected for text (I think it does not make any sense to allow for
multiply encoded macro names in a single document).  As there is no
legacy cruft with regard to macro names, we may as well force this
default encoding to be UTF-8.
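In LaTeX2e terms, forcing this default would amount to something like the following preamble fragment; this is only a loose analogue of the proposal, not a description of how Omega would do it:

```latex
% Hedged sketch: make UTF-8 the document's default input encoding,
% so that macro names and the initially selected text encoding agree.
% Any other encoding would then be selected only locally later on.
\usepackage[utf8]{inputenc}
```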

- Changes in the \InputEncoding follow the usual TeX scoping rules
(this is obviously not how Omega currently does it), and take effect
immediately during the initial tokenization.  This would mean that
the characters \ { } must be in their expected position in every
permissible encoding, but I guess that's not any more restrictive
than what we currently have.  I also assume that TeX (Omega) always
knows whether it is parsing code or text, so that it can select the
default for code, and the top of the encoding stack for text.
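The scoping behavior proposed above can be illustrated with inputenc's \inputencoding command, assuming (as I believe is the case) that its declarations respect TeX grouping:

```latex
% Hedged sketch of TeX-style scoping for an encoding change:
% inside the group, source bytes are read as Latin-1; at the
% closing brace the previous (UTF-8) encoding is restored.
\usepackage[utf8]{inputenc}
% ... later, in the document body:
{\inputencoding{latin1}
  text entered as Latin-1 bytes here
}% UTF-8 in effect again from here on
```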

- Regarding Javier's above example: I think this is the correct and
expected behavior.  I want to be able to write:

\begin{chinese}
\newcommand{\foo}{***something chinese***}
\newcommand{\bar}{***and some more chinese***}
\end{chinese}

The Chinese characters \foo\ and \bar\ are not easy to enter on a
Western keyboard.  If you frequently need \foo\ in your scholarly
discussion of Chinese literature, it is better to first define
macros for all the Chinese characters you need, and then just write
\verb|\foo| whenever you need \foo.
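A concrete instance of this strategy might look as follows (the character choices and macro names are made up for the example):

```latex
% Hedged illustration: define mnemonic macros for hard-to-type
% characters once, then use them throughout the running text.
\newcommand{\dao}{道}% the character for "way/path"
\newcommand{\deC}{德}% the character for "virtue"

The opening of the \emph{\dao\deC{} jing} discusses the \dao\ itself.
```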

(I don't know whether this babel-like begin-end language selection
would actually be legal in the document preamble, but I think the
strategy is at least very natural.)

- It may be more of a problem how to deal with \'e and the like.
Would it be possible to force immediate expansion into the
corresponding internal Unicode token?
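For reference, LaTeX2e's utf8 support declares the opposite direction, mapping a Unicode code point to the accent form; what is asked for above is the reverse step. A sketch, with the second declaration explicitly hypothetical:

```latex
% Real inputenc (utf8.def) interface: code point -> accent form.
\DeclareUnicodeCharacter{00E9}{\'e}
% The reverse step would expand \'e immediately into the internal
% token for U+00E9, e.g. via a declaration such as
%   \DeclareAccentExpansion{\'}{e}{00E9}
% -- a made-up name for illustration, not an existing API.
```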

Marcel