At 09:50 AM 99/01/05, Michael J. Downes wrote:
>Historically there has not been much leeway to work on such output
>routines because two or three complex pages (plus, perhaps, some
>pending figures) can use up 256K of main memory and there is no way to
>have the output routine check the current memory usage to decide if
>there is enough room to consider more material. Lacking such a check,
>an output routine that considers two pages or four pages at a time
>instead of one is much more likely to create problems by simply
>failing in the middle of a job with a fatal TeX out-of-memory error
>and leaving an incomplete .dvi file.
This particular problem goes away with TeX implementations that have dynamic
memory allocation.
>If you want to handle floating objects like footnotes, figures,
>tables, the algorithm can easily become infinitely complicated.
>I think it would be interesting to try to handle four pages at a time
>(two two-page spreads; try to optimize the first one with the ability
>to borrow lines from the second one). But I don't think I would want
>to go beyond that.
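The "borrow lines from the second spread" idea can be caricatured in a short
script. Everything below is invented for illustration (the quadratic badness
function, the brute-force search over a small borrow window, the even split of
each spread); it is a toy sketch of the optimization shape, not TeX's actual
page-breaking machinery:

```python
# Toy sketch: split a list of lines into two two-page spreads,
# letting the first spread borrow a few lines from (or lend a few
# lines to) the second, and pick the split with the least total
# "badness". All names and parameters here are made up.

def badness(page_lines, target):
    """Penalty for a page deviating from its target line count."""
    return (len(page_lines) - target) ** 2

def split_spreads(lines, per_page=40):
    """Brute-force search over a +/-3-line borrow at the boundary
    between the two spreads; each spread is then split evenly into
    its two pages."""
    best = None
    mid = 2 * per_page  # nominal boundary between the spreads
    for borrow in range(-3, 4):
        cut = mid + borrow
        first, second = lines[:cut], lines[cut:]
        p1, p2 = first[:len(first) // 2], first[len(first) // 2:]
        p3, p4 = second[:len(second) // 2], second[len(second) // 2:]
        cost = sum(badness(p, per_page) for p in (p1, p2, p3, p4))
        if best is None or cost < best[0]:
            best = (cost, [p1, p2, p3, p4])
    return best[1]

pages = split_spreads(list(range(157)), per_page=40)
print([len(p) for p in pages])
```

A real output routine would of course have to score ragged bottoms, club and
widow lines, and pending floats rather than raw line counts, which is where
the complexity explodes as described above.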
By the way, given the rapid increase in computer speed and memory, we will
be able to do things for which TeX is even less well suited than
what it is being used for now!
The dark side of rapid hardware progress is that it makes evolutionary,
incremental kludging easier than starting over. Witness MS bloatware.
Y&Y, Inc. http://www.YandY.com/news.htm mailto:[log in to unmask]