LATEX-L Archives

Mailing list for the LaTeX3 project

LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

Subject:
From:              Frank Mittelbach <[log in to unmask]>
Reply To:          Mailing list for the LaTeX3 project <[log in to unmask]>
Date:              Sun, 11 Feb 2001 16:30:34 +0100
Content-Type:      text/plain
Parts/Attachments: text/plain (130 lines)

I think bringing UTF8 into the debate is an important idea (Karsten already
remarked on the existence of some support for it). In this mail I would like
to explore a bit further whether something like UTF8 or Unicode would be
suitable, say, as LaTeX's internal character representation. I'm saying
LaTeX's, not TeX's, mind.

TeX is 7bit with a parser that accepts 8bit input but by default doesn't give
it any meaning. Omega, on the other hand, is 16bit (or more these days?) and
could be viewed as internally using something like Unicode for its
representation.

LaTeX might want to live on either or both of them, so its internal character
representation has to be independent of the low-level representation in the
formatter.

As a reminder: when I speak about LaTeX's internal character representation I
mean the way LaTeX internally passes characters around (as long as it isn't
doing typesetting). This representation is 7bit and consists of the visible
ASCII characters (each represented by itself, eg A as "A") plus everything
else, which is represented by what is sometimes referred to as "font encoding
specific commands". These are things like \"a or \textyen, etc (right now
roughly 900+ are defined). These font encoding specific commands might look
like TeX commands, but with respect to the internal representation you had
better view them as abstract names for characters, as they get passed around
unchanged, eg in marks or when written to files.

Only when something finally gets typeset are they associated with font slot
positions or with complicated maneuvers to position accents above or below
other characters, etc.
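
(As a minimal illustration, with a made-up heading text: a sectioning command
containing an umlaut keeps the internal form when its entry is written to the
auxiliary files; only the final typesetting pass resolves it against the
current font encoding.)

   \section{M\"obius functions}
   % the .toc entry still contains \"o; with OT1 fonts it is later typeset
   % via an accent construction, with T1 fonts via the precomposed glyph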

You can find the concepts and ideas behind this described in a talk I gave in
Brno, which is available at http://www.latex-project.org in the article
section.

====================

Now for what Marcel wrote as a summary:

 > I am aware that some of these demands cannot really be met within
 > Knuthian TeX, but it seems LaTeX3 is prepared to eventually go beyond
 > TeX.  So it may be useful to define a minimal set of required
 > extensions/changes, as this issue could be a major roadblock to
 > enlarging the developer base.  For example, is there much motivation
 > for anybody to clean up the hyphenation mess before a clean long-term
 > solution (not just a work-around) is agreed on?

The LaTeX internal character representation is a 7bit representation, not an
8bit one like UTF8. As such it is far less likely to be mangled by incorrect
conversion when files are exchanged between different platforms. I have yet to
see UTF8 text (without precautions being taken and the file being externally
announced as UTF8) handled really properly by any OS platform. Is it?


However, there is also the following question:

 wouldn't it be better if the internal LaTeX representation were Unicode
 in one flavor or another?

In other words, instead of using \"a as the representation for umlaut-a, use
something like

   \unicode{00e4}
or \uc00e4        % (as a command)
or \utfviii{...}

Note that I deliberately had these start with a \ here. Why is this needed?
Because you need to get back into control at various points, and this is only
possible if the whole construct can be viewed as a command as far as the
underlying formatter is concerned. With Omega this could probably be handled
differently, but it will have to perform reasonably on TeX as well, so I don't
see any other suitable way to present the internal form. Also, with TeX the
visible ASCII characters are basically forced to be represented by themselves,
which is another restriction.
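
To make the shape of such a construct concrete, here is a minimal sketch on
top of the current set-up, in which each code point simply expands to the
existing font encoding specific command (the name \unicode and the
unicode.<hex> helper macros are made up for this illustration, they are not
existing commands):

   \newcommand\unicode[1]{\csname unicode.#1\endcsname}
   % one table entry per code point, here mapped back onto the current
   % internal representation
   \expandafter\def\csname unicode.00e4\endcsname{\"a}      % U+00E4 umlaut-a
   \expandafter\def\csname unicode.00a5\endcsname{\textyen} % U+00A5 yen sign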

=========================================================

Now, what would be the advantages/disadvantages of the above approach?

 - clearly the above approach will give a better naming scheme, since Unicode
   is an accepted standard and as such well-defined.

 - however, it is not clear that the resulting names are easier to read, eg
   \unicode{00e4} vs \"a.

 - with intermediate forms, like data written to files, this could be a pain;
   people in Russia, for example, already have this problem when they see
   something like \cyr\CYRA\cyrn\cyrn\cyro\cyrt\cyra\cyrc\cyri\cyrya.  With
   Unicode as the internal representation this would be true for all
   languages (except English), while currently the Latin-based ones are still
   basically okay.

 - the current LaTeX internal representation is richer than Unicode, for
   better or worse: eg \" is defined individually as the representation for
   accenting the next char, which means that anything of the form
   \"<base-char-in-the-internal-rep> is automatically also a member of it,
   eg \"g.

 - the latter point could be considered bad since it allows producing
   characters not in Unicode, but independently of how you feel about that,
   the fact itself has consequences when defining mappings for font
   encodings. Right now, one specifies the accents, ie \DeclareTextAccent\",
   and for those glyphs that exist as composites one also specifies the
   composite, eg \DeclareTextComposite{\"}{T1}{a}{...}.
   With the Unicode approach as the internal representation there would be
   an atomic form, ie \unicode{00e4}, describing umlaut-a, so if that has no
   representation in a font, eg in OT1, then one would need to define the
   result for each combination (see the sketch after this list).

 - anything else? I don't really think so, and this mail is already getting
   rather long :-)
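
To make the previous point about font encodings concrete, here are the two
styles side by side. The first two lines are declarations of the current kind,
with the standard OT1/T1 slot numbers; \DeclareUnicodeComposite is an invented
name, only meant to indicate what a per-code-point mapping might look like:

   \DeclareTextAccent{\"}{OT1}{127}      % one declaration covers \"a, \"o, \"g, ...
   \DeclareTextComposite{\"}{T1}{a}{228} % plus composites where the font has the glyph
   %
   % versus one declaration per atomic code point:
   % \DeclareUnicodeComposite{00e4}{T1}{228}
   % \DeclareUnicodeComposite{00e4}{OT1}{...} % needs an explicit accent+base fallback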

So how does this all balance? I guess the first point is quite important and
helpful, since it also means that translating Unicode-based documents into the
internal form becomes rather trivial, and the strange set of names within the
current LaTeX internal character representation (all of which are basically
historical accidents and thus without much structure) is clearly far from
optimal.

But does it otherwise actually provide any advantage over the current
situation? (Other than better hiding the fact that we couldn't deal with 99%
of the Unicode characters if they appeared in a document.)

In 1992/3, when we worked on shaping the ideas of the LaTeX internal
representation, we actually did discuss similar ideas but abandoned them back
then because of resource constraints (in the software). Machines are nowadays
bigger and faster, so this isn't really much of an argument any more.

So... time for another attempt?

Comments?

frank
