On 7/28/07, Jonathan Bober <[EMAIL PROTECTED]> wrote:
> I've been working on a from-scratch implementation (attached). Right now
> it runs faster than Ralf Stephan's part.c, but not as fast as we would
> like. (And it seems to work, although I can't guarantee that right
> now.)

Great work!  Many many thanks.  Could you resend (to me at least)
your part.cc but with a copyright statement (e.g., a GPL header)?  Thanks.
I'd like to include it in SAGE-2.7.2 (asap) as an optional non-default
way to compute number_of_partitions.
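
For concreteness, a standard GPL (version 2 or later) header at the top of
part.cc would be enough; something along these lines, where the description
line is just a placeholder:

    /*
     * part.cc -- compute the number of partitions of an integer
     *
     * Copyright (C) 2007 Jonathan Bober
     *
     * This program is free software; you can redistribute it and/or modify
     * it under the terms of the GNU General Public License as published by
     * the Free Software Foundation; either version 2 of the License, or
     * (at your option) any later version.
     *
     * This program is distributed in the hope that it will be useful,
     * but WITHOUT ANY WARRANTY; without even the implied warranty of
     * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
     * GNU General Public License for more details.
     */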

This will put your code under revision control and make it easier for
other people to contribute to.  Once we're confident that
your program is producing the right answers, we can switch it over
to be the default number_of_partitions function.

> On my 1.8 GHz Core Duo, it computes p(10^8) in about 17 seconds,
> compared to about 70 seconds for the part.c previously posted. However,
> it took about 270 seconds for p(10^9). (I don't have Mathematica, so I
> can't give that comparison.) On the other hand, I don't know how much
> faster sage.math is than my laptop, but since it is 64 bit, it might run
> the code much faster.

The wall times for Mathematica 6.0 doing this calculation on my 2.33 GHz laptop
under 32-bit Linux (via Parallels) are:
   10^8       12 seconds
   10^9       95 seconds

On sage.math again (via Mathematica 5.2):
   10^8       10.88 seconds   (wall time)
   10^9       69.80 seconds   (wall time)

The times for your code on the same system are:
   10^7       wall: 1.2 seconds;      cpu: 1.096 seconds
   10^8       wall: 16 seconds;       cpu: 10.705 seconds
   10^9       wall: 213.383 seconds;  cpu: 145.673 seconds

On sage.math with your program (cpu = user + sys; mostly user in each case):
   10^7      cpu time: 1.124 seconds
   10^8      cpu time: 13.465 seconds
   10^9      cpu time: 160.932 seconds
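
(The cpu numbers above are user + sys as reported by the OS.  For reference,
a minimal, hypothetical harness along the lines below reproduces the
wall-vs-cpu split; compute_p() is just a stand-in for whatever entry point
your code exposes.)

    // Minimal timing harness: wall time via gettimeofday(), cpu time
    // (user + sys) via getrusage().  compute_p() is a placeholder workload;
    // replace it with the real call into part.cc.
    #include <sys/time.h>
    #include <sys/resource.h>
    #include <cstdio>

    static double compute_p(unsigned long n) {   // placeholder: just burns cpu
        volatile double x = 0.0;
        for (unsigned long i = 1; i <= n; i++) x += 1.0 / i;
        return x;
    }

    static double wall_seconds() {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    static double cpu_seconds() {                // user + sys
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        return (ru.ru_utime.tv_sec + ru.ru_stime.tv_sec)
             + (ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1e6;
    }

    int main() {
        double w0 = wall_seconds(), c0 = cpu_seconds();
        compute_p(100000000UL);                  // e.g. the p(10^8) run
        printf("wall: %.3f s   cpu: %.3f s\n",
               wall_seconds() - w0, cpu_seconds() - c0);
        return 0;
    }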

So you're basically within a factor of 2 of Mathematica, and I think you
now easily have the fastest open-source implementation for counting
the number of partitions of an integer.

> I think that there is still a good amount of optimization that can be
> done to make this faster. Some things that might help a lot include
> better error estimates for the tail end of the series (if such estimates
> exist) and, in general, going over everything carefully to try to make
> sure that no unneeded precision is ever used. (Once the code decides
> that no more than 53 bits of precision are needed, it switches over to
> computing with C doubles, and the rest of the computation finishes
> "instantly".)
>
> Note that this is C++ code, but it could be switched to pure C quite
> easily, since it doesn't actually use any real C++.

I'm completely fine with C++ code.
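
For anyone curious about the precision switch you describe, here is a rough,
hypothetical sketch of the general idea: accumulate a toy series with MPFR at
high precision, then finish with plain C doubles once 53 bits suffice.  It is
illustrative only, not your actual Rademacher code or error estimate.

    // Illustrative sketch: sum 1/k^3 at 256-bit precision with MPFR until the
    // terms pass a crude cutoff, then finish the tail with plain doubles.
    // part.cc uses the Rademacher series and a real error bound instead of
    // this toy threshold.
    #include <mpfr.h>
    #include <cstdio>

    int main() {
        mpfr_t sum, term;
        mpfr_init2(sum, 256);                         // high-precision phase
        mpfr_init2(term, 256);
        mpfr_set_ui(sum, 0, GMP_RNDN);

        unsigned long k = 1;
        for (;; k++) {
            mpfr_set_ui(term, 1, GMP_RNDN);
            mpfr_div_ui(term, term, k * k * k, GMP_RNDN);   // term = 1/k^3
            mpfr_add(sum, sum, term, GMP_RNDN);
            if (mpfr_get_d(term, GMP_RNDN) < 1e-6)          // crude cutoff
                break;                                // 53 bits suffice now
        }

        // Double-precision phase: the rest "finishes instantly".
        double dsum = mpfr_get_d(sum, GMP_RNDN);
        for (k = k + 1; k <= 100000; k++)
            dsum += 1.0 / ((double)k * k * k);

        printf("zeta(3) is approximately %.12f\n", dsum);

        mpfr_clear(term);
        mpfr_clear(sum);
        return 0;
    }

(Compile with something like "g++ sketch.cc -lmpfr -lgmp".)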

 -- William
