> ... it seems that there should be an algorithmic solution which
> extrapolates the available kerning information, which comes very
> (and for some fonts maybe even indistinguishably) close to the
> optimum? Something like a poor man's letterspace that's not so
> poor after all?

The crucial limiting factor is this: in TeX it is very difficult to write *fully general* macros that pick up tokens one at a time and test them before execution. Your intuition has some merit if you are willing for the letterspaced text to be subject to some strong restrictions: no { } characters, no accent commands, no conditional commands (\if... \else \fi), and no macros that take arguments (such as \ref, \index, \cite, or further font changes ...). Then:

(a) If you are willing to accept those restrictions, suitable macros can be written without a great deal of work. But who will actually want to use the macros then? I would certainly not use them in a documentclass or package, because they would be nothing but a minefield, waiting to explode with mysterious error messages in the face of any user who inadvertently uses one of the forbidden elements.

(b) For each class of problem element, it is generally possible to make the macros handle that class, but only by multiplying the complexity and macro-writing time tenfold for each class, and dividing the runtime speed by ten as well. (OK, that might be an exaggeration, but not by much; and it will seem even less of one if you are the macro writer actually facing such a task :-) It doesn't do much good to handle two classes and leave two others unhandled: the macros are still a minefield for unwary users, only the error messages turn up less often. (E.g., Joe User spends 900 hours writing his thesis without a single error message, and then the night before it's due he adds an accent command in a section title that happens to be subjected to letterspacing by the documentclass. Surprise!)

Michael Downes
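To make the fragility concrete, here is a minimal sketch of the restricted approach in (a): a plain tail-recursive scanner that picks up one token at a time and inserts a fixed kern after each. The macro name \naivespace, the sentinel \@nil usage, and the 0.08em amount are invented for illustration (a serious implementation would derive the extra space from the font's kerning data, as the question suggests). It works only on plain character tokens and misbehaves in exactly the ways described above: braces, accents, conditionals, and argument-taking macros all break it, and TeX's undelimited-argument scanning silently drops spaces too.

```latex
\makeatletter
% \naivespace{text}: letterspace by scanning one token at a time.
% Restrictions: no { }, no accents, no \if..., no macros with arguments.
\newcommand\naivespace[1]{\n@ivespace#1\@nil}
\def\n@ivespace#1{%
  \ifx\@nil#1%
    % sentinel reached: stop
  \else
    #1\kern0.08em\relax % typeset the token, then a fixed extra kern
    \expandafter\n@ivespace % tail-recurse past the \fi to grab the next token
  \fi}
\makeatother
```

Here \naivespace{LETTERSPACED} does what one hopes, while something as innocent as \naivespace{na\"\i ve} already goes wrong, since the scanner hands \" the following \kern as its argument; this is the minefield described in (b).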