T T skrev:
> Hi,
>
> I joined the list not so long ago with the main intention of lurking,
> as I don't feel qualified enough for more technical discussions. This
> thread, however, is more general and touches upon some very important
> questions, so I decided to chime in for this one time.
>
> For those who don't know me, I'm mostly just another LaTeX user and
> since recently also a member of the TeX Live development team. I'm
> also quite interested in developments happening in the LaTeX space.
>
> As I understand it, the main criticism of the LaTeX3 project given by
> J.F. is the choice of the TeX macro language for its implementation.
> Although the issue of named vs. numbered arguments (given as an
> example) seems to me largely nitpicking, I too wonder whether the
> LaTeX3 team is not prematurely putting a cap on the project's
> potential by using TeX macros for all programming.
>
> Now, I don't want to start a discussion on programming languages and
> their virtues, but rather I want to focus on one question: are there
> any clear limitations on what can be accomplished with TeX macros in
> the context of document preparation in the broad sense?
>
> Of course TeX is Turing complete, yadda yadda, but I want to look at
> this from a practical rather than theoretical point of view.
>
> Some features that I would like to see in LaTeX3 include:
> * document model (a well-defined data structure with programmatic
>   access to all document elements),

My feeling is that this is not so likely, at least concerning the
latter parts; LaTeX the reference implementation is going to remain a
system which transforms the input document stream into typeset output.
There will always be applications that do funny stuff for the sake of
producing particular effects in the typeset output.

What one could do is think about defining a "regular LaTeX3" document
model (sitting roughly at FMi's level -1, I think) that is powerful
enough for most parts of most documents, yet rigorous enough that
something other than TeX can successfully parse the parts of the
document adhering to it. One type of utility that could take advantage
of this would be spelling checkers.

> * ability to flexibly manipulate/transform those elements and operate
>   on the document as a whole,
> * ability to store and reuse the results of those manipulations and
>   not only to proceed down the TeX guts to produce typeset output.
>
> Can the above be reasonably accomplished with the TeX macro language?

Tricky question. Yes, I think some such tasks can reasonably be
accomplished within the TeX macro language (just consider the extent
to which fontinst does things other than produce typeset output), but
I also suspect one would rather want to use some other kind of tool
for most tasks of this kind. And I don't think it should be part of
the LaTeX3 project to produce such a tool either, but it could be
within scope to make life easier for such a tool.

Concretely, one could equip \DeclareDocumentCommand with features to
declare "types" (which would probably be elements of the set of
non-terminals, if this is done with respect to a BNF grammar) for the
individual arguments of the command being defined. Supposing @{type}
attaches such a declaration to the previous argument, a marked-up
definition of the \linebreak command could look like

  \DeclareDocumentCommand{\linebreak}{ O{4} @{number} }...

and a marked-up \makebox could be

  \DeclareDocumentCommand{\makebox}
    { o @{dimen-with-naturals} o @{box-pos-hor} m @{horizontal material} }...
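To make this a bit more tangible, here is a rough sketch of how such
bookkeeping could be approximated with what exists today; the \ArgTypes
helper, the \mybox command and the type names are all made up for the
illustration, and in a real implementation the recording would
presumably be built into \DeclareDocumentCommand itself rather than
stated separately:

  \usepackage{xparse}
  % Store the list of argument "types" for a command under a csname
  % derived from the command's name, where other code can find it.
  \newcommand\ArgTypes[2]{%
    \expandafter\def\csname argtypes@\string#1\endcsname{#2}}

  \DeclareDocumentCommand{\mybox}{ O{\width} O{c} m }
    {\makebox[#1][#2]{#3}}
  \ArgTypes\mybox{dimen-with-naturals, box-pos-hor, horizontal material}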
By making a suitable setting, one could then tell
\DeclareDocumentCommand to output such type mark-up to an external
file, where other tools could pick it up. (The idea here is to make it
so that if you can typeset a document, then you should also have
enough data installed to tell other tools how to parse it, provided of
course that all package authors were careful to mark up their command
declarations.) Other bits and pieces are needed as well; for example,
it also needs to be specified for each declared command where it can
be used, but I think the general idea is clear.

> In another thread (subject: xparse) the possibility of using some
> kind of preprocessor was mentioned, to do the complex data processing
> prior to the typesetting step.

Perhaps not so much "complex" as "convenient". One way to look at
xparse is that it equips LaTeX3 with a macro mechanism that is "more
powerful" (for the non-wizard) than what Plain TeX provides. LaTeX2e's
\newcommand added the ability to have one optional argument first, and
the original xparse continued along this line by adding more argument
types. Argument processing adds yet another kind of mechanism (which
can be seen, for example, in METAFONT), namely to "evaluate" an actual
argument before passing it on to the body. It is still rather far from
operations on DOM trees, though, if that is what you were thinking
about.

Lars Hellström
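P.S. To give a concrete picture of the argument processing mentioned
above: with xparse's processor syntax one can write, for instance, the
following (the commands \ShowItems and \OneItem are invented for the
example):

  \usepackage{xparse}
  % >{\SplitList{,}} splits the actual argument at commas before the
  % body ever sees it; \ProcessList then applies \OneItem to each piece.
  \DeclareDocumentCommand{\ShowItems}{ >{\SplitList{,}} m }
    {\begin{itemize}\ProcessList{#1}{\OneItem}\end{itemize}}
  \DeclareDocumentCommand{\OneItem}{ m }{\item #1}

  % Usage: \ShowItems{apples, pears, plums} gives a three-item list.

In METAFONT terms this is roughly the difference between an expr
parameter and a text parameter: the former is worked over before the
body sees it, the latter is passed on verbatim.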