LATEX-L@LISTSERV.UNI-HEIDELBERG.DE


Hi, the messages on the list over the last couple of days have been
pretty encouraging (especially those from the non-anglo-germanic
contributors) so I figure it's worth intensifying the lobbying effort
for clean multilingual support in LaTeX...

The following is an extended summary of the discussion (clearly
biased).  I encourage everybody to review, change, and extend this
summary with as much detail as possible.  I will merge any changes to
the document and repost it on the list and/or on the web.

It's important that we don't keep iterating over the same things,
but rather build a solid base of arguments and clarify the design
goals.

--Marcel

--------------------------------------------------------------------

MULTILINGUAL SUPPORT IN LATEX3: WHAT ARE THE ISSUES?

Version 1.0
2001/02/13

M.O., with contributions from the LATEX-L list.

1. Input Encoding and User Interface:

1.1. Current State:

Currently, it is difficult to enter non-English or multilingual
scripts.  Users can either provide an ASCII input file, or select an
input encoding.  While it is currently possible to produce high
quality print in many scripts, there are serious usability problems.

- Typing ASCII can be very tedious, and it makes the .tex file hard to
proofread.  Portability is good in theory, but since nothing works
out of the box, it can be a pain in practice.

- Setting an input encoding may work well for some languages.
However, it is not a solution for multilingual work (unless, for
example, UTF8 is the chosen input encoding), and few scripts are well
supported (even something as simple as ISO-8859-7 for Greek requires
fishing on the net to make it work).  (A minimal example of explicit
encoding selection follows this list.)

- In both cases diagnostic messages can be confusing to the point of
being useless.
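
For reference, a minimal sketch of what explicit input-encoding
selection looks like (latin1 and T1 are standard options; which other
encodings work out of the box varies between installations):

  \documentclass{article}
  \usepackage[latin1]{inputenc}  % interpret the upper 8-bit range as ISO-8859-1
  \usepackage[T1]{fontenc}       % select an 8-bit output (font) encoding
  \begin{document}
  Schrödinger, Ångström, crème brûlée  % typed directly, no \"o, \aa, etc.
  \end{document}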

1.2. The Case for UTF8 as Default Input Encoding:

There is a good summary at http://www.cl.cam.ac.uk/~mgk25/unicode.html
which does not need to be repeated in detail here.  Basic points:

- All the ASCII characters have their usual position in UTF8.  In
other words, current ASCII .tex files would continue to work without
anybody noticing.  (See the byte-level example after this list.)

- UTF8 encodes Unicode which covers virtually all scripts.

- On all major platforms, support for editing and displaying UTF8
either exists or is currently moving into mass deployment.  Major
programming languages have UTF8 libraries, so the basic
infrastructure for UTF8 is, or will shortly be, in place.

- Diagnostic messages could (although not with the current TeX engine)
be output in the correct script.  This would be a major improvement
for users.  (This is actually more related to the internal encoding;
see below.)
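
To illustrate the ASCII compatibility at the byte level: every ASCII
character keeps its single-byte value in UTF8, while characters above
U+007F become multi-byte sequences.  For example:

  A   U+0041  ->  0x41             (unchanged ASCII byte)
  é   U+00E9  ->  0xC3 0xA9        (two bytes)
  —   U+2014  ->  0xE2 0x80 0x94   (three bytes, em-dash)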

1.3. Existing Implementations:

- There is an implementation for UTF8 input on a TeX engine (xmltex by
David Carlisle) that also uses UTF8 as the internal representation.

- There also exists a UTF8 option for the inputenc package (more
info???).

- The "combining characters" of Unicode are difficult to handle with a
TeX based parser.  (Does "difficult" mean "impossible to get
right"???  What are the issues???)

- TeX based parsers may not handle input errors gracefully (i.e. give
meaningful error messages).  (Can someone confirm or correct
this???)

- Using UTF8 on TeX internally gives a performance hit too big to
justify as a default.  (Does this apply to the UTF8 inputenc package
as well???)

- There is Omega as a native Unicode implementation of TeX.  More
below.

2. Internal Representation and Output Encoding:

2.1. Problems with Current TeX:

It has been remarked that TeX does not really have an "internal
representation".  Rather, TeX keeps text as a string of ASCII
characters that are re-parsed through the one-and-only TeX parser
whenever something is to be done with it.  (TeX gurus: is this
simplistic statement essentially correct???)

This leads to a number of problems.

- A sufficiently general internal multilingual representation may be
impossible to maintain, unless it is Unicode in disguise.

- Hyphenation patterns are specified in terms of the output encoding.
This means that every character appearing in the hyphenation rules
must have a physical slot in the selected font.  However, logically
hyphenation should not depend on output encoding, and one should be
able to mix fonts with different output encodings without losing
correct hyphenation.

- It is rather hard to make a new font available under LaTeX.
Essentially one must create a virtual font which has all the
character slots in the places where hyphenation expects them to be.

- TeX diagnostic messages output the "internal representation", which
can quickly become unreadable for scripts that are not essentially
ASCII.

- The output encoding is limited to 8 bit fonts, which may not be
enough to get correct kerning for some languages. (Can someone
confirm or correct this???)

2.2. How Omega Separates Internal and Output Encoding:

(The following is stolen from Javier Bezos)

Let's now explain how TeX handles non-ASCII characters.  TeX can read
Unicode files, as xmltex demonstrates, but non-ASCII characters cannot
be represented internally by TeX this way.  Instead, it uses macros
which are generated by inputenc, and which are in turn expanded into a
true character (or a TeX macro) by fontenc:

é --- inputenc --> \'{e}  --- fontenc --> ^^e9

That's true even for Cyrillic, Arabic, etc. characters!
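
To make this concrete, here is (in simplified form) roughly what
inputenc and fontenc declare for é, i.e. byte 0xE9 in a Latin-1 input
file; the slot numbers are the ones used by the standard encoding
files:

  \DeclareInputText{233}{\'e}           % inputenc: byte 0xE9 -> \'{e}
  \DeclareTextAccent{\'}{OT1}{19}       % fontenc (OT1): build é from the accent in slot 19
  \DeclareTextComposite{\'}{T1}{e}{233} % fontenc (T1): \'{e} -> the precomposed glyph ^^e9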

Omega can represent non-ASCII characters internally, and hence actual
characters are used instead of macros (with a few exceptions).
Trivial as it may seem, this is in fact a HUGE difference.  For
example, the path followed by é will be:

é ---- encoding ocp ----\                /--- T1 font ocp --->  ^^e9
                         +--> U+00E9 ---+
\'e --- fontenc (!) ----/                \--- OT1 font ocp -->  \OT1\'{e}

It's interesting to note that fontenc is used as a sort of input
method!  (Very likely, a package with the same functionality but a
different name will be used.)

To accomplish that with ocp's, note that they can be divided into two
groups: those generating Unicode from arbitrary input, and those
rendering the resulting Unicode using suitable (or maybe just
available :-) ) fonts.  The Unicode text can thus be analyzed and
transformed by external ocp's at the right place.  Lambda further
divides these two groups into four (to repeat, these proposals are
liable to change; a sketch of how ocp's are activated follows the
list):

- encoding: converts the source text to Unicode.

- input: set input conventions.  Keyboards have a limited number of
keys, and hands a limited number of fingers.  The goal of this group
is to provide an easy way to enter Unicode characters using the most
basic keys of a keyboard (which means ASCII characters on Latin
ones).  Examples could be:

*  --- => em-dash  (a well known TeX input convention).
*  ij => U+0133 (in Dutch).
*  no => U+306E [the corresponding hiragana char]

Now we have the Unicode (with TeX tags) memory representation which
has to be rendered:

- writing: contextual analysis, ligatures, spaced punctuation marks,
and so on.

- font: conversion from Unicode to the local font encoding or the
appropriate TeX macros (if the character is not available in the
font).
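
As a rough illustration of the mechanics (a sketch only: inutf8 is an
ocp shipped with Omega, while the names \OCPutf and \UTFin are made up
for this example):

  \ocp\OCPutf=inutf8               % "encoding" ocp: UTF8 bytes -> Unicode
  \ocplist\UTFin=
     \addbeforeocplist 1 \OCPutf
     \nullocplist
  \pushocplist\UTFin               % activate it for the text that follows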

This scheme fits well with the Unicode Design Principles, which state
that Unicode deals with memory representation and not with text
rendering or fonts (which is left to "appropriate standards").  Hence,
most so-called Unicode fonts cannot properly render text in many
scripts because they lack the required glyphs.

There are some additional processes for "shape" changes (case, script
variants, etc.).  (Can you explain in more detail here???)

Another aim of Omega is handling language-specific typographical
features without explicit markup.  For instance: German "ck, Spanish
"rr, Portuguese f{}i, Arabic ligatures, etc.  Of course, virtual fonts
can handle that, but must one create several hundred vf files only to
remove the fi ligature?  Omega translation processes can handle that
very easily.

That's the reason Omega OTPs exist.  Yannis and John disliked these two
approaches.  They now avoid active characters as much as possible (e.g.
'~' is not active anymore), and also avoid pre-passes.  Both solutions
were needed when I was working on FarsiTeX: I needed a pre-pass to do
contextual shaping, and I needed active tatweels inserted between
letters to stretch them to fit the line of text.  I don't need either
of them now.

2.3. Further Issues:

- Even with Unicode internally, one probably still needs what is
currently used exclusively, namely named symbols and other complex
objects.  This may be fine as long as these do not need hyphenation
or non-generic kerning.

- How are combining characters handled, in particular when they
represent a glyph that has also its own Unicode slot?  The main
issue is hyphenation.  How do Unicode capable word processors handle
this?

- Unicode is still changing, especially with respect to math
characters.  Does this prevent us from getting correct basic
infrastructure in place?

- Requirements for non-European scripts have not been adequately
addressed here???

3. Alternative Engines:

As explained above, the TeX engine has limited capabilities for
multilingual typesetting and requires some rather awkward workarounds
for non-English languages.  Omega with its internal Unicode
representation is certainly an alternative.  What is the current state
of Omega, what are potential problems, and are there other
possibilities?

- It appears that Omega uses a 16 bit internal representation.  Is
this a restriction that may cause problems later, when someone finds
needed glyphs are outside the 16 bit range?

- What is the general state of Omega's TeX compatibility?  For
example, would LaTeX and a majority of third party packages run
unchanged on top of Omega (with or without full Unicode support)?

- If the engine is under discussion, the new engine should be able to
provide long-time stability comparable to TeX.  So is the basic
infrastructure that Omega provides considered solid and general
enough for its purpose?

- Would the decision to move beyond TeX cause a feature explosion in
the engine that would be difficult to control?  On the other hand,
are there features in e-TeX, NTS and friends that are deemed
essential or highly desirable, but are not provided by Omega?

4. Impact on Mainstream Usage:

What would be the impact of all this on Joe User, who does nothing but
write ordinary English-language documents?

- Joe User must install new executables in addition to class and style
files when upgrading to LaTeX3.  It is likely that he (or she) won't
notice as contemporary software packaging will hide this detail.

- Possibly a minor performance hit due to 16 or 32 bit internal
characters.  On the other hand, current LaTeX font handling has some
pretty noticeable overhead in places (\boldsymbol in amsmath, for
example), so if those cases could be handled natively, there may
actually be an overall performance improvement.

- Could type a math paper without saying Schr\"o\-din\-ger all the
time.

- Won't need to think when receiving a strange .tex file from a friend
in China.

- Availability of different fonts may increase as they would typically
not need to be VF re-encoded.

5. Stability Issues:

5.1. User Interface Stability:

- Since UTF8 will work with plain ASCII, there should not be any
upgrade problem.  Other encodings could still be specified
explicitly.

- It is important to make sure that reasonable old LaTeX files run
without problems (even if the output is not 100% pixel compatible)
to enable users to upgrade easily.

5.2. Producing Identical DVI Files:

How important is this?

- LaTeX is currently being used as an archival format (arXiv), so
there should not be unnecessary breakage.

- On the other hand, if maintaining strict compatibility at the DVI
level requires awkward compromises or a significant increase in
developer time, this has to be weighed against the (finite) hassle of
keeping and compiling an old teTeX tarball for those special cases
where strict identity is essential.

6. Multilingual Support vs. Other Design Goals of LaTeX3:

LaTeX2e works pretty well as an authoring and communication tool for
technical and scientific writing.  Areas for intended improvement (very
sketchy right now...):

- Better Class/Package designer interface.
- Better font support???
- Internationalization???

7. "Soft Arguments":

- Leaving the well-known world of TeX causes fear and uncertainty.  In
particular, it is not clear what precisely should come after TeX,
and there is the danger of obsoleting a lot of past work.

- Judging from past release schedules, LaTeX will receive a major
upgrade about once every 10 years.  So if we wait until 2014 to get
state-of-the-art international support, we may lose a lot of
potential users.

- Basing LaTeX on Omega poses a chicken-and-egg problem that will not
go away automagically.  Omega will only become completely stable if
there is unequivocal support from the user-interface community
(i.e. the LaTeX people), and LaTeX needs the Omega backend to become
a serious multilingual typesetting system.

- Unicode is currently receiving a lot of attention and publicity.  So
it may be advantageous to ride that wave, in particular as it seems
technically sound.

8. Summary:

- Unicode on TeX (default or optional): Too much of a mess, poor
performance, and probably difficult to get completely right without
invoking TeX as a Turing machine.

- Unicode on Omega (default but blindly compatible): Seems essentially
the right thing to do (no strong argument against), but still lots
of questions.

- Something else: Pie in the sky.