LATEX-L Archives

Mailing list for the LaTeX3 project

LATEX-L@LISTSERV.UNI-HEIDELBERG.DE

Subject:
From:
Reply To:
Mailing list for the LaTeX3 project <[log in to unmask]>
Date:
Wed, 8 Feb 2012 18:42:03 +0000
Content-Type:
text/plain
Parts/Attachments:
text/plain (81 lines)
* Lars Hellström <[log in to unmask]> [2012-02-08 17:34:28 +0100]:

Hi Lars,

: The problem with expandable caching is in _how_ one stores the
: cached data. The elementary way of storing information in TeX is to
: make an assignment, but an assignment is a command, and thus cannot
: be performed at expand-time. The subset of TeX programming
: techniques that are available at expand-only-time are quite unlike
: imperative programming, and also (AFAICT) not so much supported by
: the LaTeX3 kernel at the moment. One illustration of this is
: provided by the suggested solution to the "Mapping Functions
: Versions for All and Some" problem that is still in the title of
: this thread:
: it involved setting a flag variable. Setting a flag is an
: assignment, so that solution would not do for an "All" or "Some"
: predicate that had to be evaluated at expand-time.
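
If I follow you, the contrast would look something like this in expl3
terms (a minimal, untested sketch; the names and the toy condition are
made up):

```latex
% Assignment-based check: stores its answer in a flag, so it cannot
% be evaluated at expand-time (e.g. inside an x-type argument).
\bool_new:N \l_my_found_bool
\cs_new_protected:Npn \my_check_store:n #1
  {
    \bool_set_false:N \l_my_found_bool
    \int_compare:nNnT {#1} > { 9 }
      { \bool_set_true:N \l_my_found_bool }
  }

% Expandable version: no stored state; the answer *is* the expansion.
\prg_new_conditional:Npnn \my_check:n #1 { p , TF }
  {
    \int_compare:nNnTF {#1} > { 9 }
      { \prg_return_true: } { \prg_return_false: }
  }
```

So \my_check_p:n {12} could sit inside an \edef, while
\my_check_store:n could not.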

Thanks. I've nearly finished a naive implementation of a Fibonacci
macro that uses the ideas I explained in my email to Bruno. I think
it should be possible to _automate_ this kind of implementation for
similar computations. (Having to write it by hand is cumbersome
and error-prone.)
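
The cache-less core can be sketched along these lines (an untested
plain e-TeX sketch, relying only on \numexpr):

```latex
% Naive expandable Fibonacci via e-TeX's \numexpr: no assignments,
% so it works inside \edef, \the, x-type arguments, etc.
\def\fib#1{%
  \ifnum#1<2 #1%
  \else
    \the\numexpr
      \fib{\the\numexpr#1-1\relax}+\fib{\the\numexpr#1-2\relax}%
    \relax
  \fi
}
```

Every value is recomputed, so this is exponential in its argument --
which is exactly why an expandable cache would pay off.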

: This does not mean it is impossible to do at expand-time, but one
: has to employ a different set of programming techniques when doing
: it: mostly techniques from functional programming, and when nothing
: else helps resort to combinatory logic (as the equivalent and more
: traditional lambda calculus requires a lambda operator, which again
: is not available at expand-time in TeX). Since caches can be
: implemented in lambda calculus, one can make an expand-time cache in
: TeX, but the details are not exactly easy to get right.
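
That matches my (limited) experience so far: the expand-only
replacement for "update a variable" seems to be threading the state
through the arguments of a recursive macro. An untested sketch that
sums 1..n without a single assignment:

```latex
% Sum 1..n at expand-time: the accumulator lives in argument #1,
% not in a register, so no assignment is ever made.
\def\mysum#1{\mysumaux{0}{#1}}
\def\mysumaux#1#2{%
  \ifnum#2>0
    \mysumaux{\the\numexpr#1+#2\relax}{\the\numexpr#2-1\relax}%
  \else
    \the\numexpr#1\relax
  \fi
}
% \edef\x{\mysum{5}} should make \x expand to 15.
```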

Thanks. I'm only beginning to scratch the surface of what is called
expansion. As soon as April comes I'll study it in more detail. At
the moment, I'm mainly learning the expl3 API.

: FWIW, making something expandable that could cache arbitrary amounts
: of data was a motivating use-case when I set out to write that
: 2-3-tree package I've mentioned earlier. It is feasible to use (or
: will be, once finished), but I think the programming style will take
: many people quite some time to get used to.

Or it may be possible to use code generation to take (simple) bits
of TeX and turn them into caching equivalents that use the package.

: >This seems very doable and I think it's possible to do it in a time complexity
: >that is at most O( c^2 ), where c is the number of things that have to be cached.
: 
: You're thinking recursions of bounded length here? Be aware that
: even with a cache of fixed length it may be a nontrivial problem to
: write a TeX macro that will access the right elements; one tends to
: run out of arguments (#9 is the last there is). Writing correct code
: that automatically defines the necessary macro is even trickier.
: 
: >It's also possible to optimise the cache, in the sense that lookups for
: >frequently looked up items are more efficient.
: 
: If computations are not restricted to expand-time, then any access
: is just O(1), so no need to optimise. But if you want to be building
: the cache at expand-time then you're into very deep waters.
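
Indeed -- outside expand-only contexts a cache is trivial: a family of
macros keyed via \csname gives O(1) store and lookup. A sketch:

```latex
% Non-expandable store; the lookup itself happens to be expandable.
\def\cacheput#1#2{\expandafter\def\csname cache@#1\endcsname{#2}}
\def\cacheget#1{\csname cache@#1\endcsname}
% \cacheput{fib:10}{55}  stores the value;
% \cacheget{fib:10}      retrieves it, even inside an \edef.
% It is only \cacheput -- a \def -- that is barred at expand-time.
```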
: 
: >I'll try and implement this for a toy example. If it proves successful,
: 
: Considering your performance so far, a success at that would
: surprise me; more likely you'll end up producing a piece of code
: that doesn't work as intended and then asking everyone why it
: doesn't. Learn to walk before you run. :-)

The performance is pathetic compared to an equivalent implementation
in Java. I was surprised by how much slower it is. Hopefully caching
will improve it.

Anyway, I'll let you know when I have some more details. (It may even
turn out that you're right and that the complexity of the
implementation proves impossible to handle.)

Regards,


Marc van Dongen
