I'd like to bump this thread (and CC it to sage-combinat)... I really do wonder why the current DyckWords generator uses so much memory, and why it seems to be slower than the obvious backtracking code.
I have also run into memory problems using the set partitions generator. Does anybody know what's going on?

On Mon, 26 May 2008 22:37:49 +0900, Dan Drake wrote:
> I'm using Mike Hansen's new backtracking code and it's great. With
> basically zero effort, I got it to produce Dyck paths, and thought I'd
> compare the speed with the built-in DyckWords.
>
> In some cases, the new code is much faster:
>
> sage: get_memory_usage()
> 360.078125
> sage: n = 14
> sage: time iterlen(DyckPaths(2*n))
> CPU times: user 210.32 s, sys: 9.68 s, total: 220.00 s
> Wall time: 220.07
> 2674440
> sage: get_memory_usage()
> 360.078125
> sage: time iterlen(DyckWords(n))
> CPU times: user 387.99 s, sys: 3.83 s, total: 391.82 s
> Wall time: 391.89
> 2674440
> sage: get_memory_usage()
> 985.64453125
>
> (iterlen is just a little function that counts the length of an
> iterator; it and DyckPaths are below. Also, the 'n' and '2*n' is just
> in how things get indexed.)
>
> I wanted to try some larger values of n, but I started hitting swap.
> Does anyone know why DyckWords leaves so much memory used, while the
> new backtracking stuff has no effect at all after it is done running?
>
> For the curious, here are some other timings:
>
> n:   Words(n):   Paths(2*n):
> 10        1.00          1.74
> 11        3.60          4.74
> 12       14.08         16.91
> 13       63.21         60.73
> 14      391.89        220.07
> 15           X        800.35
>
> iterlen(DyckWords(15)) didn't finish; after half an hour or so, the
> sage-ipython process was taking up over 1.9 gigs of memory and I
> killed it. (The machine I'm running this on has 2 gigs of RAM and 1
> gig swap.) Doing DyckPaths(30) never used more than 510 megs or so of
> memory.
>
> Any thoughts?
>
> Dan
>
> #############################
>
> from sage.combinat.backtrack import GenericBacktracker
>
> def iterlen(xs):
>     i = 0
>     for x in xs:
>         i += 1
>     return i
>
> class DyckPaths(GenericBacktracker):
>     def __init__(self, n):
>         GenericBacktracker.__init__(self, [], (0, 0))
>         self._n = n
>
>     def _rec(self, path, state):
>         if is_odd(self._n):
>             return
>
>         len, ht = state
>
>         if len < self._n:
>             # if length is less than n, we need to keep building the
>             # path, so new length will be 1 longer, and we don't yield
>             # the path yet.
>             newlen = len + 1
>
>             # if we're not touching the x-axis, we can yield a path
>             # with a downstep at the end
>             if ht > 0:
>                 yield path + [-1], (newlen, ht - 1), False
>
>             # if the path isn't too high, it can also take an upstep
>             if ht < (self._n - len):
>                 yield path + [1], (newlen, ht + 1), False
>         else:
>             # if length is n, set state to None so we stop trying to
>             # make new paths, and yield what we've got
>             yield path, None, True
>
> #############################

-- 
--- Dan Drake <[EMAIL PROTECTED]>
----- KAIST Department of Mathematical Sciences
------- http://math.kaist.ac.kr/~drake
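P.S. For anyone who wants to play with the backtracking idea outside of Sage, here is a minimal pure-Python sketch of the same recursion (the function name and recursive style are mine, not Sage's GenericBacktracker API; it uses the same two step conditions as the class above):

```python
def dyck_paths(n, path=(), ht=0):
    """Yield every Dyck path of length n (n even) as a tuple of
    +1/-1 steps, never touching below the x-axis and ending at 0."""
    if len(path) == n:
        yield path  # the conditions below guarantee ht == 0 here
        return
    # above the axis: a downstep is allowed
    if ht > 0:
        yield from dyck_paths(n, path + (-1,), ht - 1)
    # low enough to still return to 0: an upstep is allowed
    if ht < n - len(path):
        yield from dyck_paths(n, path + (1,), ht + 1)
```

Because it is a generator, counting the paths with something like iterlen never stores more than one partial path per recursion level; the number of paths of length 2*n is the n-th Catalan number (2674440 for n = 14, matching the counts above).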
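P.P.S. I don't know what DyckWords is actually doing internally, but the general pattern being probed here — an object that retains everything it produced versus a generator that holds nothing afterwards — can be sketched in plain Python (the CachingWords class is hypothetical, purely for illustration, and not a claim about Sage's implementation):

```python
import itertools

def iterlen(xs):
    """Count the elements of an iterator without storing them."""
    return sum(1 for _ in xs)

class CachingWords:
    """Hypothetical class that caches every element it yields, so the
    memory stays in use after iteration finishes."""
    def __init__(self, n):
        self.n = n
        self._cache = []

    def __iter__(self):
        for w in itertools.product([0, 1], repeat=self.n):
            self._cache.append(w)  # outlives the iteration
            yield w

def lazy_words(n):
    """Generator equivalent: nothing is retained once iteration ends."""
    yield from itertools.product([0, 1], repeat=n)
```

Both count the same 2**n words, but after iterlen finishes, the CachingWords instance is still holding all of them, while lazy_words leaves nothing behind — which would match the get_memory_usage() behavior above if something like this is happening inside DyckWords.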