Date: Sun, 13 Apr 1997 12:16:20 +0200
>> ... it seems
>> that there should be an algorithmic solution which
>> extrapolates the available kerning information which
>> comes very (and for some fonts maybe even indistinguishably)
>> close to the optimum? Something like a poor man's
>> letterspace that's not so poor after all?
>
>The crucial limiting factor is this: in TeX it is very difficult to
>write *fully general* macros that pick up tokens one at a time and test
>them before execution. Your intuition has some merit, if you are willing
>that the letterspaced text should be subject to some strong
>restrictions: no { } characters, no accent commands, no conditional
>commands (\if... \else \fi), no macros that take arguments (such as
>\ref, \index, \cite, or further font changes ...).
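For concreteness, here is a minimal sketch of the one-token-at-a-time loop
meant above (plain TeX; the names \letterspace, \lsloop and \lsend are
invented for this example, and a fixed kern stands in for the kerning-aware
spacing under discussion):

    \def\lsend{\lsend}                   % quark marking the end of the text
    \def\letterspace#1{\lsloop#1\lsend}  % append the marker, start the loop
    \def\lsloop#1{%
      \ifx\lsend#1%
      \else
        #1\kern0.06em                    % emit the token, then extra space
        \expandafter\lsloop              % tail-recurse on the next token
      \fi}
    % Usage: \letterspace{Widely}
    % It runs into exactly the restrictions listed above: the undelimited #1
    % eats spaces, strips the braces off { } groups, and grabs a macro such
    % as \ref or an accent command as a single token, so the kern lands
    % between the macro and its argument; the inserted kerns also suppress
    % the font's own ligatures and kerning pairs.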
I think this is the same problem that Frank Mittelbach encountered in the
"shortref" discussion.
Actually, the restrictions are not so severe: it is possible to parse
special tokens like "{", "}", space, etc.
In fact, I used this to remove such a restriction from the definition macro I
made that produces conditional macros, namely the restriction that it cannot
be enclosed in { }. I wrote a macro that picks up the argument up to the next
"}" or \par, whichever comes first, making sure that what is picked up remains
unaltered (putting back any "{}" or spaces that may have been stripped out).
This argument can subsequently be examined repeatedly in non-deterministic
parsing.
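A sketch of that kind of scanner (again plain TeX, with invented names
\collect, \collected, \collectlook and so on, so not the actual macro) might
look as follows; it gathers tokens up to the next explicit "}" or \par,
whichever comes first, and puts back the braces and spaces that ordinary
undelimited arguments would strip:

    \newtoks\collected                  % accumulates the scanned material
    \def\collect{\collected{}\collectlook}
    \def\collectlook{\futurelet\peek\collectcheck}
    \def\collectcheck{%
      \ifx\peek\par        \let\next\collectdone  \else
      \ifx\peek\egroup     \let\next\collectdone  \else % stop before "}"
      \ifx\peek\bgroup     \let\next\collectgroup \else % keep a { } group whole
      \ifx\peek\spacetoken \let\next\collectspace \else % keep a stripped space
                           \let\next\collectone
      \fi\fi\fi\fi \next}
    \def\collectone#1{\collected\expandafter{\the\collected#1}\collectlook}
    \def\collectgroup#1{\collected\expandafter{\the\collected{#1}}\collectlook}
    \def\collectdone{}                  % the caller then examines \the\collected
    % \spacetoken and \collectspace use the space-matching trick of LaTeX's
    % \@ifnextchar: \spacetoken becomes an explicit space token, and
    % \collectspace is defined with a single space as its parameter text.
    \def\:{\let\spacetoken= } \:
    \def\:{\collectspace} \expandafter\def\: {%
      \collected\expandafter{\the\collected\space}\collectlook}

After "\collect one {two} three}", for instance, \the\collected holds
"one {two} three" (with the spaces recorded as \space), and the final "}" is
left in the input for the caller.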
Nevertheless, you do not get truly general macros by such parsing. One
problem is the lack of proper control over macro expansion.
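To make the expansion problem concrete (a hypothetical example built on the
sketch above): the scanner compares tokens with \ifx, which does not expand
them, so a macro that merely expands to "}" is not recognized as a delimiter.

    \def\closebrace{\egroup}  % a macro that merely expands to an implicit "}"
    % In  \collect one\closebrace two}  the scan does not stop at \closebrace:
    % it is stored as an ordinary, unexpanded token, and scanning ends only at
    % the explicit "}" after "two".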
Hans Aberg