LATEX-L Archives, March 1997

Subject: Re: Shortref mechanism
From: "Randolph J. Herber" <[log in to unmask]>
Reply-To: Mailing list for the LaTeX3 project <[log in to unmask]>
Date: Mon, 3 Mar 1997 11:41:34 -0600
Content-Type: text/plain
Parts/Attachments: text/plain (182 lines)

The following header lines are retained to preserve attribution:
|Date: Mon, 03 Mar 1997 12:14:03 +0100
|From: [log in to unmask] (Hans Aberg)
|Subject: Re: Shortref mechanism
|To: Mailing list for the LaTeX3 project <[log in to unmask]>
|Cc: "Randolph J. Herber" <[log in to unmask]>

|"Randolph J. Herber" <[log in to unmask]> writes:

|>|  I cannot follow the details in your reasoning, but I can note that with
|>|deterministic parsing, the method generally used in LaTeX, conditional
|> ^^^^^^^^^^^^^ ^^^^^^^                                      ^^^^^^^^^^^
|>|parsing have such limits.
|> ^^^^^^^
|>|  But with non-deterministic parsing more general things can be done:
|>            ^^^^^^^^^^^^^^^^^
|>|  For example, I just made a definition command that can produce commands
|>|having optional arguments; in this general approach, I had to switch from
|>|LaTeX style deterministic parsing to non-deterministic parsing.

|>To someone who has written several small compilers and has studied automata
|>theory at the doctorate level, your word choice as highlighted above is
|>quite jarring.  By using a power automaton, a non-deterministic automaton
|>can be reduced to a deterministic automaton.  Therefore, one does not gain
|>any power of expression by using a non-deterministic automaton; rather, one
|>only gains compaction of the description.
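
        (A small worked illustration of the subset construction; the
        automaton below is invented for this note.  Take an NFA over
        the alphabet {a,b} accepting strings that end in ``ab'':

                q0 --a--> {q0,q1}    q0 --b--> {q0}    q1 --b--> {q2}
                q2 accepting

        Each state of the determinized automaton is a set of NFA states:

                {q0}    --a--> {q0,q1}    --b--> {q0}
                {q0,q1} --a--> {q0,q1}    --b--> {q0,q2}
                {q0,q2} --a--> {q0,q1}    --b--> {q0}
                {q0,q2} accepting

        The recognized language is unchanged; in the worst case the
        deterministic automaton needs up to 2^n states, which is the
        compaction being traded away.)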

        Your following comments are not pertinent to my comments above.

        You may feel that you are making up nomenclature to describe your
        proposed algorithms.  But, in fact, you are using nomenclature
        with already assigned meaning in the field of computer language
        processing.

        Furthermore, you ``added insult to injury'' by deleting my
        provision of the proper nomenclature from the field of computer
        language processing that does pertain to your proposed changes
        to TeX's handling of its input, to wit:


                I believe that what you intended is the distinction of
                context free and context sensitive languages.  From
                what I have read in the TeX book, the tokenizer of TeX
                is context sensitive with a single character look-ahead
                and the TeX language based on the recognized tokens is
                context free.

                It is a significant change in the behavior of the TeX
                language to change it from being context free to being
                context sensitive.  But, it may be a necessary change.
                Most modern computer languages are context sensitive
                with a single token look ahead.  A few look ahead two
                tokens in some situations.  I imagine that some look
                ahead three tokens.  Parser generators for single-token
                look-ahead are readily available.

                What you are proposing is a change from zero token look
                ahead to one token look ahead.
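
        (A minimal LaTeX sketch of one-token look-ahead, using the
        LaTeX kernel's \@ifnextchar; the macro names \demo and
        \demo@opt are invented for this illustration:

                \makeatletter
                \newcommand\demo{%
                  \@ifnextchar[{\demo@opt}{\demo@opt[default]}}
                \def\demo@opt[#1]#2{(option: #1, argument: #2)}
                \makeatother

                % \demo{x}    -> (option: default, argument: x)
                % \demo[y]{x} -> (option: y, argument: x)

        \@ifnextchar peeks at the next token without consuming it;
        that single token of look-ahead is what allows the optional
        argument to be recognized.)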

        Please, would you use the proper nomenclature?

        The pairing from your improper nomenclature to what I believe
        is the pertinent nomenclature for what you are attempting to
        discuss is:

                deterministic           ==>             context free
                non-deterministic       ==>             context sensitive

        Computer languages have both structure, i.e. syntax, and meaning,
        i.e. semantics.  Computer languages that are studied for their
        syntactical properties might not have associated semantics.  All
        others do have semantics.  Many computer languages are context
        free in their syntax even though they have semantics and therefore
        context among the language elements because of those semantics.

        I believe that TeX (with the exception of the one character
        look ahead in its tokenizer which is used to locate the
        termination of tokens) is context free __in its syntax.__
        This does not mean that TeX does not have semantics nor does
        it mean that these semantic elements do not have context
        among the various semantic elements.

        I believe that Frank Mittelbach's point and position (not
        ``problem,'' as you say) is that changing TeX from a context
        free to a context sensitive syntax (grammar, if you wish)
        is too large of a change to be considered.

|  This reasoning would be true in any sufficiently general-purpose language,
|but TeX is not such a language (or it is unknown if it is).

        My observations above pertain to all languages which have syntax.
        TeX is a language which has syntax.  Therefore, it is such a language.

|  The second thing is that, even though something may be theoretically
|possible, it may be practically impossible, because you simply do not have
|time to both do that implementation and pay your bills.

        This is Frank Mittelbach's point as I understand it.

|  The third thing one must consider is that a computer language is not
|only used to manipulate logical data, but logical data that has a semantic
|interpretation attached to it. Any logical transformation must keep track of
|that semantic interpretation, and this is related to the practicality
|question, I guess.

        At the syntax level of language processing, the semantics do
        not pertain.  At the semantics level of language processing,
        semantics is the entire purpose of the processing.  Any
        compiler or interpreter that does semantic processing must
        handle it as dictated by the semantics associated with the
        syntactical elements.

|  With TeX the problem is this:

|  You have a variable #1 equal to some parameter text, say ##1##2.
|When #1 picks up an argument in the first pass, an argument of the form
|    {section}{theorem}
|will be transformed into
|    sectiontheorem,
|so, when writing a deterministic parser by hand, one puts back the argument
|to the next command as {##1}{##2}, say, if you want to put it all back. Now,
|working in this generality, there is no obvious way of transforming
|    #1 --> #1_new
|by a command doing
|    ##1##2 --> {##1}{##2}
|implicitly.
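
        (A minimal TeX sketch of the brace stripping described above;
        the macro names \grab and \next are invented for this
        illustration:

                \def\next#1#2{[#1][#2]}
                \def\grab#1#2{\next{#1}{#2}}  % braces restored by hand
                \grab{section}{theorem}       % -> [section][theorem]

        Had \grab instead been defined as \def\grab#1#2{\next#1#2},
        \next would have been handed the bare tokens ``sectiontheorem''
        and would have taken ``s'' and ``e'' as its two arguments.)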

        I have written several compilers and know how to process
        context sensitive grammars.

|  By reverting to non-deterministic parsing, one can get around this
|problem, by first picking up some text that surely contains the original
|##1##2, and then sending this original text to the next command, instead of
|the text partially parsed by #1 (which may be corrupted).

        This passing along, unchanged, of those syntactical elements
        which the grammar has determined to belong to following
        elements is part of the processing that occurs in a context
        sensitive parser.
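
        (A sketch, again with invented macro names, of handing material
        on untouched: pick it up as one balanced group and re-emit it
        verbatim instead of splitting it into ##1 and ##2:

                \def\consume#1{<got: #1>}
                \def\forward#1{\consume{#1}}  % #1 is never taken apart
                \forward{{section}{theorem}}
                % \consume receives {section}{theorem} with its
                % braces intact

        The original braces survive because the argument is never
        broken into separate parameters.)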

|  But this does not solve Frank Mittelbach's problem, as he pointed out.

        Unless you consider Frank Mittelbach's lack of interest in, or
        lack of resources for, redesigning and reimplementing TeX's
        syntax processing to be ``Frank Mittelbach's problem,'' Frank
        Mittelbach does not have a problem here.  I do not have a
        problem here; computer languages are a major portion of my
        education and work.

|>I believe that what you intended is the distinction of context free and
|>context sensitive languages.  From what I have read in the TeX book, the
|>tokenizer of TeX is context sensitive with a single character look-ahead
|>and the TeX language based on the recognized tokens is context free.

|  TeX is highly context sensitive, and this is much of the point with TeX:
|Each environment or grouping has its own set of local variables, which can
|be used to change the context rather radically. This is unrelated to the
|stuff I discussed above.

        Please read my comments above.  There is a major, significant
        difference between syntax and semantics.  I do not deny that
        TeX is quite sensitive to the semantic context of the material
        it processes.  It would not be useful if it were not so.  This
        does not prevent TeX from having a context free grammar.

|  Hans Aberg

        Deciding whether TeX should have a context free or a context
        sensitive grammar is an appropriate topic for this forum.

        Since context sensitive grammars tend to be more complex and
        to use more computer resources to process, I believe the
        TeX developers will not change the grammar of TeX in such a
        way as to make TeX's grammar context sensitive.

Randolph J. Herber, [log in to unmask], +1 630 840 2966,
CD/OSS/CDF CDF-PK-149O Mail Stop 234
Fermilab, Kirk & Pine Rds., P.O. Box 500, Batavia, IL 60190-0500.
(Speaking for myself and not for US, US DOE, FNAL nor URA.)
(Product, trade, or service marks herein belong to their respective owners.)
N 41 50 26.3 W 88 14 54.4 and altitude 700' approximately, WGS84 datum.
