LATEX-L Archives

Mailing list for the LaTeX3 project

LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

Sender: Mailing list for the LaTeX3 project <[log in to unmask]>
Subject:
From: "William F. Hammond" <[log in to unmask]>
Date: Wed, 9 Dec 1998 20:41:01 -0500
Reply-To: Mailing list for the LaTeX3 project <[log in to unmask]>
Parts/Attachments: text/plain (83 lines)

Marcel --

You write to the list:

> and power of LaTeX.  Two reasons:
>
> - The LaTeX input syntax (or whatever we take that for) is a
. . .
> - LaTeX is hackable. While this is certainly opposed to the goals of a


You think that DTDs for authoring and processors driven by
author-supplied collections of little functions are not hackable???


and you write:
> "William F. Hammond" wrote:
> > What you describe is the conscious goal of my GELLMU, which is found at
> >
> >                http://math.albany.edu/~hammond/gellmu/  ,
>
> I looked at your pages, but I cannot see the advantage of "LaTeX-like"
> vs. "subset of TeX/LaTeX". When a document is authored, it is typeset
> many many times,   ...


  ^^^^^^^^^^^^^^^  for proofreading, I assume?  OK, so you process
only from markup to DVI for proofreading.  Along the way the
GELLMU-to-SGML processor debugs your syntax, and after that the SGML
parser checks whether your document is marked up logically, pointing
out the exact problem.

For that matter, processing for both DVI and HTML and then validating
the HTML takes only about a second longer and provides a little extra
verification of the document's logical accuracy against a portable
public format.

(Of course, if you never want a different output form and if you know
now that you are never going to want "smart" documents, then use
LaTeX only.  GELLMU, SGML, and XML make sense only if you are
interested in multiple outputs.)


>            ...   while it is converted to other formats very few
> times. So it seems counterintuitive to require preprocessing for the
> most frequently occurring task. Moreover, a large percentage of
> existing LaTeX documents are portable or could be made portable by
> trivial changes, so this base of documents would be lost for no
> apparent gain. Last, the loss of hackability problem also applies
> here.


In practice *today* only Lamport LaTeX has a chance of being viewed
as portable.  LaTeX2e appears to lack a wide enough distribution at
this point.

What is legal TeX is an even broader category than what is legal LaTeX.
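
For instance (a toy example of my own, nothing deep), the following
is perfectly legal input to TeX yet is not LaTeX at all, so a tool
that promises to handle everything TeX accepts must also cope with
raw macro definitions:

    % legal plain TeX, not LaTeX: no \documentstyle, no environments
    \def\greeting{Hello, world.}
    \greeting
    \bye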

And is DVI really portable?  Only if one restricts oneself to fonts
(including glyph sets) that can be assumed to have universal
distribution to all DVI-equipped sites.  Even so, a faculty member in
a math or physics department has the problem that his chairman may not
know how to print DVI, may not have a web browser configured to view
DVI, and his dean is even less capable with DVI.  That is why one
sometimes sees publications on the web in PDF rather than in DVI.  :-(

University graduates in the United States who take jobs at small
two-year or four-year undergraduate institutions sometimes find that
they have entered a community where nobody else uses TeX or LaTeX.
It is likely, however, that in the near future everybody will have
XML engines for screen viewing and printing.

If that does not happen, then I plan to put up my papers with ugly,
but fully decipherable, math in ordinary HTML because some of my
audience can view HTML but cannot view PDF.

Portability aside, my guess is that, as a theoretical matter, there
exist no _fully_ _automatic_ _failsafe_ _translations_ from legal
Lamport LaTeX to _any_ format that is not DVI or constructible from
DVI (e.g., PostScript and PDF).  After all, the macro layer can
perform arbitrary computation while a document is typeset, so the
only fully general way to evaluate a document is to run TeX itself,
and what TeX produces is DVI.
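
A toy illustration of the obstruction: the typeset text below depends
on arithmetic performed while TeX runs, so a translator that declines
to run TeX must re-implement that arithmetic, and in general the
macro layer can compute anything.

    \documentstyle{article}      % Lamport LaTeX; \documentclass under LaTeX2e
    \begin{document}
    \newcounter{n}
    \setcounter{n}{2}
    \addtocounter{n}{\value{n}}  % the counter is doubled while TeX runs
    The answer is \arabic{n}.    % typesets ``The answer is 4.''
    \end{document}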

                                   -- Bill
