Javier,

 > Apparently, the sample always loads the ot1 variants, no matter
 > which encoding is selected. I think that you mean
 >
 > \DeclareRobustCommand\selectfont    ...etc.
 >
 > instead of
 >
 > \DeclareRobustCommand\Xselectfont    ...etc.

hmmm, yes. i plead guilty on that one. i thought i had tested it after making
the final changes from the private version to the one overwriting the nfss
primitives, but ...

 > But anyway...

yes, anyway. please everybody: change the above in the sample if you try it.

 > As you know, I was experimenting a couple of months ago with this idea
 > in my draft for Lambda (the multilingual environment for Omega).

was actually not aware of that (that you experimented with multiple encodings)

 > However, I found several problems. For example:
 > - if I say \fontencoding{T1,OT1} we will get t1cmr which points to another
 >   font (ec) and not to a t1 encoded cmr,

perhaps we should have that as a separate debate, but the ec fonts were
supposed to be extended cmr, or at least this is what they started out to be.
I know that Joerg in the end made one or the other change, but at least the
original intention was that those fonts should be indistinguishable on
their common subset of glyphs (and this is why in LaTeX they are both
considered family cm).

if a lot of people think this is not the case then this opens an important
discussion about what to do with them, but it doesn't seem to me a criticism
or a problem with the general approach taken in my sample code.
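for reference, the mapping Javier describes is set up in the fd file. a much
simplified sketch of what t1cmr.fd does (not the actual file, which covers
many sizes, series, and shapes):

```latex
% Simplified sketch, not the real t1cmr.fd: requests for the
% T1/cmr combination are resolved to the ec fonts, here ecrm1000
% scaled to whatever size was asked for.
\DeclareFontFamily{T1}{cmr}{}
\DeclareFontShape{T1}{cmr}{m}{n}{<-> ecrm1000}{}
```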

 > - more importantly, we lose control of the final result, because
 >   a faked accented letter may not be exactly the same as an actual composite
 >   letter. It so happens that no two TeX installations are the same, and
 >   perhaps a different font is selected on another system just because a file
 >   has not been installed.

but this is true already, isn't it? as of today the formatting of a latex
document depends on a number of factors, one of which is the set of available
fonts. so 100% output compatibility is only achieved if you

 - have an identical set of fd files
 - have identical metrics (in the past this was an issue especially with
   PostScript fonts)
 - actually have the fonts installed that the fd files and metrics are
   pointing to.
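as background to the "faked accented letter" point: with standard NFSS (this
is not my sample code) an accented letter in OT1 is built at run time with
TeX's \accent primitive, which among other things disables hyphenation of the
word, while in T1 the same input selects a single precomposed glyph:

```latex
% Illustration of faked vs. real composite letters:
\fontencoding{OT1}\selectfont caf\'e % faked: accent placed over e
                                     % via \accent at run time
\fontencoding{T1}\selectfont  caf\'e % real: one glyph from the font
```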

so i don't see that the situation would change in quality. Agreed, with more
possibilities you are (potentially at least) likely to get a wider range of
results; but on the other hand either you let the system take responsibility
(which means trying to find fonts suitable for the intended script (glyph
collection)) or you force this selection onto the user. And we know that the
latter is unsatisfactory as well, since not many people understand why they
need to say \usepackage[...]{fontenc} etc, or they rightly feel that they
should not have to worry about it.

i don't really see that there is a chance in the world to achieve 100% output
compatibility between sites unless you enforce a far more rigid scheme (which
isn't really possible). I mean, you would need to declare a far bigger set of
files untouchable and required, and you would also need to ensure that no
additional files are present that might change your setup.

if I now write a document and specify \fontencoding{T1}, it might not run at
all at a site without T1 fonts (though such a site is in theory not allowed
to exist), or it might switch to ec as the default fonts, while with a range
of encodings i would get a result "closer" to the intended output.


also please note that my code (after your fix:-) does both: you can still
specify a single encoding, and then only that encoding will get used, i.e.
you get the situation as it is now, where the user has total control
(assuming that the fd files are the same).
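in other words (using the syntax from your message):

```latex
% Single encoding: behaves exactly like current NFSS, the user
% keeps total control over which fonts are used.
\fontencoding{T1}\selectfont
% Encoding list: try T1 first and fall back to OT1 where no
% suitable T1 font is found.
\fontencoding{T1,OT1}\selectfont
```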

 > Despite that, I think that is the right way, and I'm studying how to solve
 > these issues. Any ideas?

do you have any other issues besides the two above? you mentioned them as
"for example".

cheers
frank