From: "Jonathan Bober" <[EMAIL PROTECTED]>
> It's not quite ready for inclusion, but, eventually, maybe Bill Hart's
> prediction about it only taking 10 seconds to compute p(10^9) on
> sage.math will come true.
That's fantastic! Great work!
Alec
Since you've extended this thread some more, I'll mention that the
current version of the code that I am working on runs a little bit more
than twice as fast as that version. It's posted at
http://www.math.lsa.umich.edu/~bober/partitions_c.cc
if anyone is interested in testing it. (But don't was
From: "William Stein" <[EMAIL PROTECTED]>
> Thanks. I've fixed this for sage >= 2.7.3 in the attached patch.
Just upgraded SAGE to 2.7.3 and number_of_partitions works twice as fast as
in 2.7.2 with algorithm='bober' for 10^9 - it took 112 sec in 2.7.2 and only
56 sec in 2.7.3. This is in Ubu
That's interesting. I haven't done any speed comparisons, but it seems
impossible that a carefully coded fixed-precision library would not
be faster than a multi-precision library.
Did you use double-doubles when you got below 106 bits?
I *have* made the assumption here that these guys know how
On 8/2/07, Alec Mihailovs <[EMAIL PROTECTED]> wrote:
> > It uses long doubles now when the precision is small enough (and then,
> > later, just doubles like before), and the speedup is significant.
>
> By the way, I just installed SAGE 2.7.2 (from source) on my Ubuntu 7.04
> system, with AMD Athlo
From: "Jonathan Bober" <[EMAIL PROTECTED]>
>
> Incidentally, I should have mentioned here that I submitted a patch for
> version .4, and also updated it at
>
> http://www.math.lsa.umich.edu/~bober/partitions_c.cc
>
> It uses long doubles now when the precision is small enough (and then,
> later,
On Aug 1, 2007, at 1:26 PM, Jonathan Bober wrote:
> On Wed, 2007-08-01 at 12:29 -0700, Justin C. Walker wrote:
[snip]
> Since the time with the self-compiled code is so similar to the time
> with the sage-compiled code, I would guess that you are linking to the
> sage mpfr library, which wasn't c
Incidentally, I should have mentioned here that I submitted a patch for
version .4, and also updated it at
http://www.math.lsa.umich.edu/~bober/partitions_c.cc
It uses long doubles now when the precision is small enough (and then,
later, just doubles like before), and the speedup is significant.
On Tue, 2007-07-31 at 14:24 -0700, Bill Hart wrote:
> I do highly recommend this quad double library by the way. And they've
> implemented all manner of transcendental functions too!! The quad-
> doubles would give you 206 bits, even on your machine.
>
> Bill.
> URLs: http://sage.scipy.org/sage/ a
On Wed, 2007-08-01 at 12:29 -0700, Justin C. Walker wrote:
>
> On Jul 31, 2007, at 10:54 PM, Jonathan Bober wrote:
> > On Tue, 2007-07-31 at 22:16 -0700, Justin C. Walker wrote:
> >> On Jul 31, 2007, at 18:36 , William Stein wrote:
> [snip]
> > That is puzzling. Are you sure that you have the latest version of the code?
On Aug 1, 2007, at 12:29 PM, Justin C. Walker wrote:
>
>
> On Jul 31, 2007, at 10:54 PM, Jonathan Bober wrote:
>> On Tue, 2007-07-31 at 22:16 -0700, Justin C. Walker wrote:
>>> On Jul 31, 2007, at 18:36 , William Stein wrote:
> [snip]
>> That is puzzling. Are you sure that you have the latest version of the code?
On Jul 31, 2007, at 10:54 PM, Jonathan Bober wrote:
> On Tue, 2007-07-31 at 22:16 -0700, Justin C. Walker wrote:
>> On Jul 31, 2007, at 18:36 , William Stein wrote:
[snip]
> That is puzzling. Are you sure that you have the latest version of the
> code?
I downloaded the .3 source and ran it on my
Hi,
Regarding the discrepancy in timings between me
and Justin Walker using OS X *intel* Mathematica,
it turns out I was running Mathematica on my
laptop under OS X via Rosetta, so those times should
be ignored. The timings I've posted under Linux are
all fine. SO, currently SAGE and Mathematic
On 8/1/07, Justin C. Walker <[EMAIL PROTECTED]> wrote:
> > The last version I posted to the list was significantly slower than
> > what
> > I have now (especially since I think I accidentally had significant
> > improvements commented out in the last code I posted to the list.)
>
> That was the
On Jul 31, 2007, at 22:54 , Jonathan Bober wrote:
> On Tue, 2007-07-31 at 22:16 -0700, Justin C. Walker wrote:
>> On Jul 31, 2007, at 18:36 , William Stein wrote:
[snip]
>> On a Core 2 Duo 2.33 GHz, computing the number of partitions of 10^9:
>>Mathematica 5.2 (PartitionsP[10^9]): 95.5115 s
On Tue, 2007-07-31 at 17:35 -0500, Alec Mihailovs wrote:
> > Actually, even on my 32 bit core duo, the long double type seems to give
> > 64 bits of precision, so using it might help a little. Do you have any
> > idea how to check at run/compile time what the precision of a double or
> > a long double is?
On Tue, 2007-07-31 at 22:16 -0700, Justin C. Walker wrote:
[...]
> Checking partition count computation:
>
> On a Core 2 Duo 2.33 GHz, computing the number of partitions of 10^9:
>Mathematica 5.2 (PartitionsP[10^9]): 95.5115 s
>Sage 2.7.2.1 (number_of_partitions(10^9)): 125.2 s
>
On Tue, 2007-07-31 at 22:16 -0700, Justin C. Walker wrote:
>
> On Jul 31, 2007, at 18:36 , William Stein wrote:
>
> >
> > Hi,
> >
> > Just to extend this thread some more, a few remarks.
>
> While we're extending this, here's my $0.02 canadian (:-})
[...]
> Checking partition count computation:
On 7/31/07, Justin C. Walker <[EMAIL PROTECTED]> wrote:
> While we're extending this, here's my $0.02 canadian (:-})
> > Yep -- Mathematica 5.2 interestingly totally sucks at
> > computing the number of partitions on an Intel OSX
> > machine... and SAGE rocks.
>
> Checking partition count computation:
On Jul 31, 2007, at 18:36 , William Stein wrote:
>
> Hi,
>
> Just to extend this thread some more, a few remarks.
While we're extending this, here's my $0.02 canadian (:-})
> On sage.math:
> SAGE:
> sage: time n = number_of_partitions(10^7)
> CPU times: user 0.73 s, sys: 0.00 s, total: 0.73 s
Hi,
Just to extend this thread some more, a few remarks.
(1) quaddouble has been included in SAGE for several months
now, thanks to the hard work of Didier Deshommes and Robert
Bradshaw.
sage: RQDF
Real Quad Double Field
sage: RQDF(2).sin()
0.9092974268256816953960198659117448427022549714478902
From: "Alec Mihailovs" <[EMAIL PROTECTED]>
> Being an assembler programmer, I can say definitely that all FPU registers
> on x86 are 80-bit and all compilers that I know compile long double as
> 80-bit numbers.
From another point of view, an 80-bit real gives 64-bit precision in the usual sense
(mantissa
> Actually, even on my 32 bit core duo, the long double type seems to give
> 64 bits of precision, so using it might help a little. Do you have any
> idea how to check at run/compile time what the precision of a double or
> a long double is?
Being an assembler programmer, I can say definitely that all FPU registers
on x86 are 80-bit and all compilers that I know compile long double as
80-bit numbers.
Or probably 212 actually. :-)
On 31 Jul, 22:24, Bill Hart <[EMAIL PROTECTED]> wrote:
> I do highly recommend this quad double library by the way. And they've
> implemented all manner of transcendental functions too!! The quad-
> doubles would give you 206 bits, even on your machine.
>
> Bill.
>
>
I do highly recommend this quad double library by the way. And they've
implemented all manner of transcendental functions too!! The quad-
doubles would give you 206 bits, even on your machine.
Bill.
On 31 Jul, 21:33, Jonathan Bober <[EMAIL PROTECTED]> wrote:
> On Mon, 2007-07-30 at 17:24 -0700, B
Ah, that's better. Excellent. I feel much happier with this library
now.
Thanks.
Bill.
On 31 Jul, 21:38, "Mike Hansen" <[EMAIL PROTECTED]> wrote:
> From COPYING,
>
> "Redistribution and use in source and binary forms, with or without
> modification, are permitted provided that the following con
I believe that the IEEE standard guarantees you 80 bits (though it's
only 64 bits of mantissa or something like that). The trouble is, you
aren't guaranteed the IEEE standard.
I've spent much time researching this, but either I didn't look at the
right websites, or this stuff isn't documented wel
From COPYING,
"Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met"
--Mike
On 7/31/07, Bill Hart <[EMAIL PROTECTED]> wrote:
>
> Oh well, I don't understand all this licensing stuff. So do you
> understand
On Mon, 2007-07-30 at 17:24 -0700, Bill Hart wrote:
> Wow!! Excellent work indeed.
>
> In fact on 64 bit X86 systems you could actually use the 128 bit long
> doubles to give you a little bit more precision (I believe it only
> gives you 80 bits including exponent and sign, so probably 64 bit
> mantissa).
Oh well, I don't understand all this licensing stuff. So do you
understand this license? What about derived works. Does that mean it
is not possible to modify this library and redistribute the modified
version?
In particular, as a C++ library it is no use to me unmodified. There
are also some fun
In this case, the license does not say that it is in the public domain
(but that it is a copyrighted work!); however, you can use it
"AS IS".
I think that the only condition imposed on us is
to include the disclaimer.
Pablo
On 7/31/07, Pablo De Napoli <[EMAIL PROTECTED]> wrote:
> Certainly
Certainly no, not everything from a public institution is in the
public domain. This
should be analyzed case by case.
In case of doubt it would be better to ask the author.
Pablo
On 7/31/07, Bill Hart <[EMAIL PROTECTED]> wrote:
>
> Yes, I've ended up using Hida and Bailey's quad-double package.
Yes, I've ended up using Hida and Bailey's quad-double package. Very
cool.
But the license just says not to use the LB name to promote any
derived product. Am I right in assuming this is GPL compatible, i.e.
because they are a public institution everything is automatically
public domain?
Bill.
Last update, 2005?
On Mon, 30 Jul 2007, Bill Hart wrote:
>
> Hi Didier,
>
> Thanks. I also just found:
>
> http://www.nongnu.org/hpalib/
>
> which fascinates me. Has anyone used it?
>
> Bill.
>
>
> On 31 Jul, 01:46, "didier deshommes" <[EMAIL PROTECTED]> wrote:
>> 2007/7/30, Bill Hart <[EMAIL PROTECTED]>:
Hi Didier,
Thanks. I also just found:
http://www.nongnu.org/hpalib/
which fascinates me. Has anyone used it?
Bill.
On 31 Jul, 01:46, "didier deshommes" <[EMAIL PROTECTED]> wrote:
> 2007/7/30, Bill Hart <[EMAIL PROTECTED]>:
>
> > I have a similar problem in some code I am currently
> > writin
2007/7/30, Bill Hart <[EMAIL PROTECTED]>:
> I have a similar problem in some code I am currently
> writing. I need precisely quad precision, so mpfr is out of the
> question.
Hi Bill,
You might want to consider Yozo Hida's quaddouble C/C++ package here:
http://www.cs.berkeley.edu/~yozo/
There is
On 31 Jul, 01:24, Bill Hart <[EMAIL PROTECTED]> wrote:
> It would be interesting to see the time for Mathematica on a 32 bit
> X86 machine, since this would tell us if that is what they do.
Doh! I should have read William's timings more carefully. He gives the
times for a 32 bit machine. So I g
Wow!! Excellent work indeed.
In fact on 64 bit X86 systems you could actually use the 128 bit long
doubles to give you a little bit more precision (I believe it only
gives you 80 bits including exponent and sign, so probably 64 bit
mantissa).
It would be interesting to see the time for Mathematica on a 32 bit
X86 machine, since this would tell us if that is what they do.
On 7/29/07, Pablo De Napoli <[EMAIL PROTECTED]> wrote:
> great work, Jonathan!
>
> I've tested, and I've found the following problems:
>
> 1) part 1 hangs
Yep. In the SAGE wrapper for part, I just do that
as a special case (i.e., don't call part for n <= 1)
> 2) compiling with -Wall gives this
great work, Jonathan!
I've tested, and I've found the following problems:
1) part 1 hangs
2) compiling with -Wall gives this warning
part.cc:865: warning: unused variable 'temp2'
3) part without arguments returns 42
Pablo
On 7/29/07, Jonathan Bober <[EMAIL PROTECTED]> wrote:
> Attached is a somewhat improved version.
Attached is a somewhat improved version.
(I don't want to flood the mailing list with attachments, so I won't
send this file as an attachment to the list again after this.)
The main improvement is that the code is better commented and more
readable now. The secondary improvement is an increase i
William Stein wrote:
Hello,
>
>> (Random question, since it just caused problems for me again - is there
>> a way to sign up for sage-devel with a non gmail email address?)
>
> I don't know. I don't think so. Let me know if this isn't true.
In my experience the participation in google groups
On 7/28/07, Jonathan Bober <[EMAIL PROTECTED]> wrote:
>
> This is fine, except that my last name is Bober, not Bobber.
Oops, I'll fix that. Thanks.
> (Random question, since it just caused problems for me again - is there
> a way to sign up for sage-devel with a non gmail email address?)
I don
This is fine, except that my last name is Bober, not Bobber.
(Random question, since it just caused problems for me again - is there
a way to sign up for sage-devel with a non gmail email address?)
On Sat, 2007-07-28 at 16:58 -0700, William Stein wrote:
> On 7/28/07, Jonathan Bober <[EMAIL PROTECTED]> wrote:
On 7/28/07, Jonathan Bober <[EMAIL PROTECTED]> wrote:
> I've been working on a from-scratch implementation (attached). Right now
> it runs faster than Ralf Stephan's part.c, but not as fast as we would
> like. (And it seems to work, although I can't guarantee that right
> now.)
Jonathan,
I've
On 7/28/07, Jonathan Bober <[EMAIL PROTECTED]> wrote:
> I've been working on a from-scratch implementation (attached). Right now
> it runs faster than Ralf Stephan's part.c, but not as fast as we would
> like. (And it seems to work, although I can't guarantee that right
> now.)
Great work! Man
I've been working on a from-scratch implementation (attached). Right now
it runs faster than Ralf Stephan's part.c, but not as fast as we would
like. (And it seems to work, although I can't guarantee that right
now.)
On my Core Duo 1.8 GHz, it computes p(10^8) in about 17 seconds,
compared to
On Jul 27, 2:16 am, Bill Hart <[EMAIL PROTECTED]> wrote:
> Alternatively you could just use the implementation here:
>
> http://www.ark.in-berlin.de/part.c
In fact, that was my attempt at a C implementation of this Pari
script:
* Psi(n, q) = local(a, b, c); a=sqrt(2/3)*Pi/q; b=n-1/24; c=sqrt(b);
On 7/27/07, Pablo De Napoli <[EMAIL PROTECTED]> wrote:
>
> Your version did work for me!
> Pablo
Great. Is there any chance you could work on any of the optimization or
timing ideas Bill Hart suggested in the previous post? I won't be able to
work on this myself for a while.
-- William
Your version did work for me!
Pablo
On 7/27/07, William Stein <[EMAIL PROTECTED]> wrote:
> On 7/27/07, Pablo De Napoli <[EMAIL PROTECTED]> wrote:
> >
> > I've tested it, but it seems to be buggy: it works up to 156
> >
> > ./part 156
> > p(156)=
> > 73232243759
> >
> > but for 157 gives a floating
-- Forwarded message --
From: William Hart <[EMAIL PROTECTED]>
Date: Jul 27, 2007 11:58 AM
Subject: Re: [sage-devel] Re: computing the number of partitions of an integer
To: William Stein <[EMAIL PROTECTED]>
Some things to try (in order of importance):
* how long
On 7/27/07, Pablo De Napoli <[EMAIL PROTECTED]> wrote:
>
> I've tested it, but it seems to be buggy: it works up to 156
>
> ./part 156
> p(156)=
> 73232243759
>
> but for 157 gives a floating point exception error
> (and a gdb trace says it is at line 176 of
> the source code
>
> r2 = r0 % r1;
This problem seems to be fixed in pari-2.4.2
(not yet released! code in CVS)
?numbpart((5*10535+4))
time = 16 ms.
%2 =
132775694853416614410709359082850304628573507097148711672236023178345995078715426574030466347126367130008280402558755564770977624160764152691390102001213796297899514590335375857
I've tested it, but it seems to be buggy: it works up to 156
./part 156
p(156)=
73232243759
but for 157 gives a floating point exception error
(and a gdb trace says it is at line 176 of
the source code
r2 = r0 % r1;
in function g
I've compiled it using
gcc part.c -g -lmpfr -lm -DTEST_CO
Alternatively you could just use the implementation here:
http://www.ark.in-berlin.de/part.c
which seems to only rely on mpfr and GMP. However, since the Pari
version was based on it, I suspect it may also need correction.
He does seem to adjust the precision as he goes, but I've no idea how
fa
Here we go. Apparently g(h,q) = q*s(h,q), where s(h,q) is a Dedekind
sum.
Then for h < q if r_0, r_1, ..., r_{n+1} are the remainders occurring
in the Euclidean algorithm when computing gcd(h,q) (of which there
should be about log q) then:
s(h,q) = 1/12 * sum_{i=1}^{n+1}((-1)^(i+1) (r_i^2 + r_{i-
OK, apparently the Dedekind sums should be done using an algorithm due
to Apostol. I haven't tracked it down yet, but this is clearly what we
need.
Bill.
On 26 Jul, 23:37, Bill Hart <[EMAIL PROTECTED]> wrote:
> On 26 Jul, 23:18, Jonathan Bober <[EMAIL PROTECTED]> wrote:
>
> > s(h,k) = h(k - 1)(2k - 1)/(6k) - (k-1)/4 -
> > (1/k) sum_{j=1}^{k-1} j*floor(hj/k).
On 26 Jul, 23:18, Jonathan Bober <[EMAIL PROTECTED]> wrote:
> s(h,k) = h(k - 1)(2k - 1)/(6k) - (k-1)/4 -
> (1/k) sum_{j=1}^{k-1} j*floor(hj/k).
>
> (To compute the floor in the inner sum, you can just use integer
> division.)
Yes, this will be faster. Of course integer
On Thu, 2007-07-26 at 13:10 -0700, Bill Hart wrote:
>
> Perhaps if one had a fast way of evaluating the Dedekind sum, one
> might have a chance.
>
> Bill.
>
I think this gives a faster way to compute it:
Write the sum as
s(h,k) = sum_{j=1}^{k-1} j/k [hj/k - floor(hj/k) - 1/2]
(This isn't s
There are two tricks I can see that are required to make this faster.
Firstly notice that L(n,q)*Psi(n,q) is relatively small once q gets
beyond a certain point. Thus L(n,q)*Psi(n,q) can be computed using
ordinary double precision for q greater than some very small limit
(depending on n). If one
Pari implements it, but incorrectly:
sage: number_of_partitions(5*10535+4,algorithm="pari")
132775694853416614410709359082850304628573507097
148711672236023178345995078715426574030466347126
367130008280402558755564770977624160764152691390
102001213796297899514590335375857238756973648770
246446295
I just found this on
http://reference.wolfram.com/mathematica/note/SomeNotesOnInternalImplementation.html:
"PartitionsP[n] uses Euler's pentagonal formula for small n, and the
non-recursive Hardy-Ramanujan-Rademacher method for larger n."
--Mike
On 7/26/07, Alec Mihailovs <[EMAIL PROTECTED]> wr
"William Stein" <[EMAIL PROTECTED]> writes:
> QUESTIONS: Why is Mathematica about 10 times faster than PARI
> at this? What are the best ways to compute the number
> of partitions of n? Is it a calculation involving fast arithmetic
> with dense polynomials of large degree, which would be best
From: "Mike Hansen" <[EMAIL PROTECTED]>
> I'm not sure where the bottleneck in PARI is since I can't imagine
> Mathematica uses a different method to compute the number of
> partitions.
I don't know what is used in the latest Mathematica version, but originally
NumberOfPartitions function in th
The fastest way I know of to compute the number of integer partitions
is to use the Hardy-Ramanujan-Rademacher formula ( see Rademacher
series on http://en.wikipedia.org/wiki/Integer_partition ), and I'm
pretty sure this is what PARI uses. From modules/part.c:
* This program is basically the i
On 7/25/07, Timothy Clemans <[EMAIL PROTECTED]> wrote:
> How do you in general find out how say Magma or Mathematica does something?
Short answer: it is often virtually impossible -- that's one of the main
reasons for the existence of SAGE. Read this page for the official
Mathematica stance on t