So I posted a question which Marc answered: gsc/gcc will *not* optimize away all possible tail recursion problems! Don't let the recursion depth get above 1000 or 10000!
I think he said that if the depth of *non*-tail recursion (i.e. real recursion, not iteration) goes above ~10000, it will start allocating call frames on the heap and thus get slower.
Right, thanks for correcting my dumb "typo".
Regarding tail calls, to my knowledge Gambit always handles them correctly (unless you intentionally switch off proper-tail-calls (only effective in the interpreter), of course).
Yeah. Didn't know about switching off proper-tail-calls.
(He also said that if you declare a big inlining-limit, the compiler will partly unroll the recursive code, so running it will allocate less space for call frames, which might have had an effect on the speed of your first attempt.)
Oh, thanks, I didn't understand that part of Marc's reply! The 2 ways that Marc & Germaine helped me were by handing me the declaration (declare (standard-bindings) (fixnum) (not safe) (run-time-bindings) (inline) (inlining-limit 1000) (block)) and by profiling my code, which showed that trees might be good.
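Laid out one item per line, that declaration reads like this (the comments are my own gloss of the Gambit manual, not Marc's words):

    (declare
      (standard-bindings)      ; assume the standard procedures aren't redefined
      (fixnum)                 ; specialize arithmetic to fixnums
      (not safe)               ; drop run-time type and overflow checks
      (run-time-bindings)
      (inline)                 ; allow user procedures to be inlined
      (inlining-limit 1000)    ; be generous about how much gets inlined
      (block))                 ; compile the whole file as one block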
So using "loop" as name in your named let looks misleading as it's not an iteration.
Yeah, good point. But I knew Poly->Tree wasn't tail-recursive; I posted that yesterday, even. Thanks for your tips:
Maybe I can give some other quick help:
- I think some algorithms really need recursion (and I think especially trees)
I was wondering about that. OK, I think all the non-tail action in my program has to do with trees. Is there a good reference for tail-recursive tree functions? I need to make trees, and merge them. The thing I worry about is something you said in a different context:
But that only moves from using call frames to using cons cells, so only makes a small linear difference if at all.
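(If I understand that caveat right, it means something like this -- a made-up example counting the leaves of a nested list, nothing to do with my actual functions: the second version is tail-recursive, but the to-do list it carries around grows in the heap instead of on the stack.)

    ;; Non-tail: the pending additions live in call frames.
    (define (count-leaves t)
      (cond ((null? t) 0)
            ((pair? t) (+ (count-leaves (car t)) (count-leaves (cdr t))))
            (else 1)))

    ;; Tail-recursive with an explicit to-do list: the pending work now
    ;; lives in the stack list, i.e. in cons cells on the heap.
    (define (count-leaves-iter t)
      (let loop ((stack (list t)) (count 0))
        (cond ((null? stack) count)
              ((null? (car stack)) (loop (cdr stack) count))
              ((pair? (car stack))
               (loop (cons (caar stack) (cons (cdar stack) (cdr stack)))
                     count))
              (else (loop (cdr stack) (+ count 1))))))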
Yeah, see, I understand why my non-tail program that was adding up 1s bombed: I had this huge stack, which in my mind means I wrote down a huge expression (+ 1 (+ 1 (+ 1 (... 1)))), and you gotta allocate space to store that huge expression! The tail-recursive program didn't do any such thing: it just looped sum <- sum + 1, which doesn't take any space. So a better test (which I haven't done) would be to first build a huge list (1 1 1 ... 1) and then tail-recursively sum the list. Well, that huge list is gonna allocate a lotta space!
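In code, the two versions I have in mind look roughly like this (made-up names, not my actual program):

    ;; The version that bombed: each call leaves a pending +, so the stack
    ;; effectively holds the expression (+ 1 (+ 1 (+ 1 ... 1))).
    (define (count-up n)
      (if (= n 0)
          0
          (+ 1 (count-up (- n 1)))))

    ;; The tail-recursive version: just sum <- sum + 1, in constant space.
    (define (count-up-iter n)
      (let loop ((i n) (sum 0))
        (if (= i 0)
            sum
            (loop (- i 1) (+ sum 1)))))

    ;; The better test I haven't done: build the list (1 1 1 ... 1) first,
    ;; then sum it tail-recursively; the space goes into the list itself.
    (define (list-of-ones n)
      (let loop ((i n) (acc '()))
        (if (= i 0) acc (loop (- i 1) (cons 1 acc)))))

    (define (sum-list lst)
      (let loop ((l lst) (sum 0))
        (if (null? l) sum (loop (cdr l) (+ sum (car l))))))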
And that's how it goes for my (presently non-tail) tree functions: Everything on the stack gets put into the answer. That is, my tree functions return a huge tree.
Maybe you could reorganize your problem to only need part of the tree in memory? Maybe using lazy evaluation (using streams) helps? (I can't say without looking at the problem in detail)
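By "streams" I just mean pairs whose tails are delayed, so only the part you actually walk over is ever built. A generic sketch (not tied to your polynomial code):

    (define-syntax cons-stream
      (syntax-rules ()
        ((_ a b) (cons a (delay b)))))     ; the tail is a promise

    (define (stream-car s) (car s))
    (define (stream-cdr s) (force (cdr s)))

    ;; An unbounded stream, of which only what you take gets computed.
    (define (integers-from n)
      (cons-stream n (integers-from (+ n 1))))

    ;; First n elements as an ordinary list.
    (define (stream-take s n)
      (if (= n 0)
          '()
          (cons (stream-car s) (stream-take (stream-cdr s) (- n 1)))))

    ;; (stream-take (integers-from 0) 5)  =>  (0 1 2 3 4)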
I'd be surprised if these tips would help. Uh, what I'm doing isn't that hard or mathematically sophisticated. From my README file:
Defines simple functions for 3 datatypes:
  Indeterminate = Indet = Z
  Monomials = Mono = (listof Z)
  Polynomials = Poly = (listof Mono)
  Tags = (cons Mono (union Mono 'cycle))
  Tree ::= empty | (listof (n Tree))
I use the convention that Z = the datatype of integers, N = nonnegative integers, the natural numbers plus 0.
So Mono-1 = Poly-0 = '(), etc. A Monomial is often called a term. We call a term (a b c ...) admissible if b <= 2a, c <= 2b, etc. A term has an s-degree & a t-degree, calculated by Mono-s & Mono-t. The term (a_1 ... a_s) has s-degree s, and t-degree a_1 + ... + a_s + s.
Crucial is the left-lexicographical order, the dictionary order, simplified as we only apply it to terms (words) of the same length.
(a_1 ... a_s) < (b_1 ... b_s) iff either a_1 < b_1, or else a_1 = b_1, and (a_2 ... a_s) < (b_2 ... b_s)
A polynomial (i.e. a list of terms = monomials) is sorted if its terms are in descending left-lex order. A polynomial is simplified if it doesn't have any repeated terms. Then Poly->Tree turns a sorted simplified polynomial into a tree, which is supposed to make it easier to find stuff. merge-tree then merges 2 such trees.
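To spell those definitions out in code, here is a sketch of what they amount to (only Mono-s and Mono-t are real names from my program; Mono-admissible?, Mono<? and Poly-sorted-simplified? are made up for illustration, and none of this is the actual code):

    ;; A Mono is a list of integers (a_1 ... a_s).
    (define (Mono-s m)                 ; s-degree: the length s
      (length m))

    (define (Mono-t m)                 ; t-degree: a_1 + ... + a_s + s
      (+ (apply + m) (length m)))

    ;; (a b c ...) is admissible when b <= 2a, c <= 2b, etc.
    (define (Mono-admissible? m)
      (or (null? m)
          (null? (cdr m))
          (and (<= (cadr m) (* 2 (car m)))
               (Mono-admissible? (cdr m)))))

    ;; Left-lex order on terms of the same length:
    ;; (a_1 ... a_s) < (b_1 ... b_s) iff a_1 < b_1, or a_1 = b_1 and the
    ;; rests compare the same way.
    (define (Mono<? a b)
      (cond ((null? a) #f)                     ; equal terms aren't less
            ((< (car a) (car b)) #t)
            ((> (car a) (car b)) #f)
            (else (Mono<? (cdr a) (cdr b)))))

    ;; Sorted and simplified: strictly decreasing in left-lex order,
    ;; which is the input Poly->Tree expects.
    (define (Poly-sorted-simplified? p)
      (or (null? p)
          (null? (cdr p))
          (and (Mono<? (cadr p) (car p))
               (Poly-sorted-simplified? (cdr p)))))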