LATEX-L Archives

Mailing list for the LaTeX3 project

LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

Subject:
From: Michael John Downes <[log in to unmask]>
Reply-To: Mailing list for the LaTeX3 project <[log in to unmask]>
Date: Tue, 13 Feb 2001 14:44:46 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (24 lines)
Hans Aberg <[log in to unmask]> writes:

> >> The <not yet gulped up ASCII (or 8-bit) buffer is read converted into
> >> tokens at need.
> >
> >TeX reads into the buffer one line at a time.
>
> How can this be true? What happens if a command in the middle of a line
> changes the catcodes, or contains a macro that expands to a \input
> <filename>?

Sorry, I didn't use the terminology very well. TeX input first goes into
a string buffer, one line at a time. This string buffer is the only
place where TeX deals with ASCII chars as input; all other "input
streams" are streams of tokens. Tokenization occurs by scanning
substrings from this string buffer and adding the corresponding tokens
to the current input stream (which, if we also call it a "buffer", is a
different buffer, not the one that contains the simple 8-bit characters
as first read from the file).
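
That on-demand tokenization is also the answer to the question about
mid-line changes: a \catcode assignment takes effect for the characters
later on the same line, because they are still sitting untokenized in
the string buffer. A minimal plain TeX sketch:

    % A mid-line catcode change is seen by the rest of the same line,
    % since those characters have not been turned into tokens yet.
    \catcode`\|=0 |message{the bar was read with its new catcode}
    \bye

An \input works the same way: TeX pushes the new file onto its input
stack and keeps the untokenized remainder of the current line in the
buffer, resuming it after the file has been read.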

If you get an error "TeX capacity exceeded: buffer size", it means
that a line of the input file was too long to be read into the string
buffer.
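
(For what it's worth, in a Web2C-based installation that limit is
normally configurable; the relevant texmf.cnf setting is buf_size,
though the exact name and default vary between distributions. Something
along these lines:)

    % texmf.cnf (Web2C): allow longer input lines
    buf_size = 200000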
