dongen wrote 2012-02-07 19.34:
> * Bruno Le Floch<[log in to unmask]> [2012-02-06 04:26:24 -0500]:
>
> Hi Bruno,
>
> I'm starting to understand the problem with expandable caching.
[snip]
> I think there's a general expandable solution to this. Let's assume we want
> to compute f( n ). We compute it by computing f2( {}, n ). The first argument
> is the cache of f2. The value that's returned by f2 is a pair (c,r), where c
> is the last cache of f2 and r the result of f( n ).
The problem with expandable caching is in _how_ one stores the cached data.
The elementary way of storing information in TeX is to make an assignment,
but an assignment is a command, and thus cannot be performed at expand-time.
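To see concretely what "cannot be performed at expand-time" means, here is a small plain-TeX sketch (names are my own, for illustration): inside an \edef, everything expandable is expanded, but an assignment like \advance is merely copied through, not executed.

```tex
\newcount\counter
\counter=1
% Try to "do" an assignment during pure expansion:
\edef\result{\the\counter \advance\counter by 1 \the\counter}
% \result now holds roughly "1 \advance \counter by 1 1":
% the \advance was copied verbatim into the replacement text,
% not executed, and \counter is still 1.
```

Only when \result is later *executed* does the \advance actually happen; at expand-time it is inert.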
The subset of TeX programming techniques that are available at
expand-only-time are quite unlike imperative programming, and also (AFAICT)
not so much supported by the LaTeX3 kernel at the moment. One illustration
of this is provided by the suggested solution to the "Mapping Functions
Versions for All and Some" problem that is still in the title of this thread:
it involved setting a flag variable. Setting a flag is an assignment, so that
solution would not do for an "All" or "Some" predicate that had to be
evaluated at expand-time.
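For contrast, an "All" predicate *can* be done without any flag, by having the recursion deliver its answer directly through expansion. A plain-TeX sketch (macro names are mine; assumes a comma list of nonempty numeric items):

```tex
% Expandable test: does every item in a comma list differ from 0?
% Expands to 1 or 0; no assignment is ever made, so it works in \edef.
\def\AllNonzero#1{\AllNonzeroAux#1,\relax,}
\def\AllNonzeroAux#1,{%
  \ifx\relax#1%
    1% reached the end marker: every item passed
  \else
    \ifnum#1=0 %
      0\expandafter\expandafter\expandafter\GobbleToRelax
    \else
      \expandafter\expandafter\expandafter\AllNonzeroAux
    \fi
  \fi}
\def\GobbleToRelax#1\relax,{}% discard the unexamined tail of the list
% \edef\r{\AllNonzero{3,5,7}} gives \r -> 1
% \edef\r{\AllNonzero{3,0,7}} gives \r -> 0
```

Note the triple \expandafter dance needed just to jump out of the nested conditionals before recursing; this is the sort of thing one must get right everywhere in expand-only code.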
This does not mean it is impossible to do at expand-time, but one has to
employ a different set of programming techniques when doing it: mostly
techniques from functional programming, and when nothing else helps resort
to combinatory logic (as the equivalent and more traditional lambda calculus
requires a lambda operator, which again is not available at expand-time in
TeX). Since caches can be implemented in lambda calculus, one can make an
expand-time cache in TeX, but the details are not exactly easy to get right.
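To give a flavour of what "storing" data without assignments looks like, here is a sketch (all names invented for the example) where a cache is just a token list of \pair{key}{value} entries carried around as a macro argument, and lookup is a fully expandable linear scan:

```tex
% A cache is a token list like \pair{1}{a}\pair{2}{b}; the markers
% \pair and \endcache are never expanded, only pattern-matched.
\def\NotFound{<missing>}% placeholder result for absent keys
\def\CacheGet#1#2{% #1 = cache token list, #2 = numeric key
  \CacheGetAux{#2}#1\pair{#2}{\NotFound}\endcache}% sentinel entry
\def\CacheGetAux#1\pair#2#3{%
  \ifnum#1=#2 %
    \expandafter\FirstOfTwo
  \else
    \expandafter\SecondOfTwo
  \fi
  {\GobbleToEndcache{#3}}{\CacheGetAux{#1}}}
\def\FirstOfTwo#1#2{#1}
\def\SecondOfTwo#1#2{#2}
\def\GobbleToEndcache#1#2\endcache{#1}% keep value, drop rest of cache
% \CacheGet{\pair{1}{a}\pair{2}{b}}{2} expands to b
```

This only does lookup; *extending* the cache expandably means passing the enlarged token list onward through the recursion rather than assigning it anywhere, which is exactly where the functional style becomes unavoidable.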
FWIW, making something expandable that could cache arbitrary amounts of data
was a motivating use-case when I set out to write that 2-3-tree package I've
mentioned earlier. It is feasible to use (or will be, once finished), but I
think the programming style will take many people quite some time to get used to.
> This seems very doable and I think it's possible to do it in a time complexity
> that is at most O( c^2 ), where c is the number of things that have to be cached.
You're thinking of recursions of bounded length here? Be aware that even with a
cache of fixed length it may be a nontrivial problem to write a TeX macro
that will access the right elements; one tends to run out of arguments (#9
is the last there is). Writing correct code that automatically defines the
necessary macro is even trickier.
> It's also possible to optimise the cache, in the sense that lookups for
> frequently looked up items are more efficient.
If computations are not restricted to expand-time, then any access is just
O(1), so no need to optimise. But if you want to be building the cache at
expand-time then you're into very deep waters.
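The O(1) non-expandable approach is the familiar one (sketch; names are mine): give each cached result its own control sequence via \csname, so that storing is one \def and fetching is one \csname access.

```tex
% Ordinary caching with assignments: O(1) store and lookup.
\def\CacheStore#1#2{% #1 = key, #2 = value
  \expandafter\def\csname cached@#1\endcsname{#2}}% an assignment!
\def\CacheFetch#1{\csname cached@#1\endcsname}
\CacheStore{fib10}{55}
% \CacheFetch{fib10} expands to 55.
```

Note the asymmetry: \CacheFetch is expandable, but \CacheStore is not, so this scheme cannot *build* the cache inside an \edef; that is precisely the hard part.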
> I'll try and implement this for a toy example. If it proves successful,
Considering your performance so far, a success at that would surprise me;
more likely you'll end up producing a piece of code that doesn't work as
intended and then asking everyone why it doesn't. Learn to walk before you
run. :-)
Lars Hellström