On Mar 31, 10:55 pm, "William Stein" <[EMAIL PROTECTED]> wrote:
> On Mon, Mar 31, 2008 at 6:48 PM, Roman Pearce <[EMAIL PROTECTED]> wrote:
>
> > You need "Algorithms for Computer Algebra" by Geddes, Czapor, and
> > Labahn:
> > Chapter 5: Chinese Remainder Theorem
> > Chapter 6: Newton's Iteration and Hensel Lifting
Since the Debian distribution of SAGE uses Maxima with GCL Lisp, I figured
I'd run the benchmarks Mike posted on my installation. The SAGE times are
comparable to those in Mike's test, while the Maxima tests are faster:
sage: load /home/tabbott/fermat_gcd_1var.py
sage: time a = p1.gcd(p2, algo
Hey Dan,
The testdoc.py file looks to be right. I just upgraded to Ubuntu 8.04
(32bit) and could not reproduce the problem with the 2.11 binary
posted on sagemath.org.
Has anyone else seen the problem Dan reported in 2.11?
Cheers,
Yi
On Mon, Mar 31, 2008 at 7:00 PM, Dan Drake <[EMAIL PROTECTED]> wrote:
I've Debianized the GAP Guava package and scipy_sandbox packages, so now
the tests related to those pass.
However, I seem to have one new doctest failure.
sage -t devel/doc/const/const.tex
**
File "/home/tabbott/test2/const.p
On Mon, Mar 31, 2008 at 11:58 PM, root <[EMAIL PROTECTED]> wrote:
> >Michael Abshoff made that comment. He's motivated by wanting
> >to port Sage to a wide range of architectures and keep everything
> >maintainable, since he works incredibly hard on that. He suffers
> >a huge amount trying to deal with build issues on various platforms
Hi!
I think multivariate gcd is a never-ending topic, discussed very often
in many communities.
Actually, I have never tried to implement it, and hopefully I never will.
But I *enjoyed* ;-) many discussions about it.
So I think, if you want to succeed, the first task is what Roman said:
Learni
>Michael Abshoff made that comment. He's motivated by wanting
>to port Sage to a wide range of architectures and keep everything
>maintainable, since he works incredibly hard on that. He suffers
>a huge amount trying to deal with build issues on various platforms
>such as Solaris, Linux PPC, etc.
On Mon, Mar 31, 2008 at 11:21 PM, root <[EMAIL PROTECTED]> wrote:
> William,
>
>
> >By the way, Richard Fateman pointed out to me offlist that
> >Maxima running on top of clisp _might_ be much
> >slower than Maxima on gcl. This could be relevant to
> >our benchmarking.
>
> Not to start an implementation war
William,
>By the way, Richard Fateman pointed out to me offlist that
>Maxima running on top of clisp _might_ be much
>slower than Maxima on gcl. This could be relevant to
>our benchmarking.
Not to start an implementation war, but GCL compiles to C, which
compiles to machine code, whereas clisp is byte-compiled and interpreted.
>
> > Also, to Joel Mohler:
> > You need "Algorithms for Computer Algebra" by Geddes, Czapor, and
> > Labahn:
> > Chapter 5: Chinese Remainder Theorem
> > Chapter 6: Newton's Iteration and Hensel Lifting
> > Chapter 7: Polynomial GCD
> > Chapter 8: Polynomial Factorization
> >
> >
On Mon, Mar 31, 2008 at 6:48 PM, Roman Pearce <[EMAIL PROTECTED]> wrote:
>
> > The important thing isn't what algorithm is implemented, but that the
> > result is fast(er than Magma).
>
> The important thing is whether users can hope to get an answer at all.
> The old Singular factoring code was hopeless and I bet multivariate gcd is still hopeless.
On Mon, Mar 31, 2008 at 9:21 PM, Mike Hansen <[EMAIL PROTECTED]> wrote:
>
> I've posted some benchmarks at
> http://wiki.sagemath.org/MultivariateGCDBenchmarks .
>
Just out of curiosity, is Magma vastly faster
than everything else in every single benchmark? I'm having
some trouble matching up
I've posted some benchmarks at
http://wiki.sagemath.org/MultivariateGCDBenchmarks .
--Mike
On Mon, Mar 31, 2008 at 6:48 PM, Roman Pearce <[EMAIL PROTECTED]> wrote:
>
> > The important thing isn't what algorithm is implemented, but that the
> > result is fast(er than Magma).
>
> The important thing is whether users can hope to get an answer at all.
On Mar 30, 2008, at 19:37, William Stein wrote:
> Hello folks,
>
> Sage 2.11 was released on March 30th, 2008. It is available at
>
> http://sagemath.org/download.html
Built on Mac OS X, 10.5.2 (Dual Quad Xeon, "-j6"):
The build was fine:
real 66m31.393s
user
On Mar 31, 2008, at 8:18 PM, Robert Bradshaw wrote:
>> By the way, I just did some cyclotomic polynomial benchmarking
>> on an Intel Core 2 Duo 2.6GHz laptop. Here are the results:
>>
>> n    | Sage 2.11 | newcyc at #2654 | PARI | Magma
>> 10^5 | 1.29      | 0.37            | 1.24 | 0.02
>
On Mon, 31 Mar 2008 at 05:28PM -0700, Yi Qiang wrote:
> Hi Dan,
> the dsage doctest failure you're seeing should not be happening with
> the official 2.11 release. Can you
>
> 1) Verify that you are running the official 2.11 release
sage: version()
'SAGE Version 2.11, Release Date: 2008-03-30'
Interesting course. Thanks for posting.
On Mon, Mar 31, 2008 at 9:04 PM, William Stein <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> This is video from lecture one of my Sage course
>
> http://video.google.com/videoplay?docid=-826792746508034&hl=en
>
> --
> William Stein
> Associate Professor
> The important thing isn't what algorithm is implemented, but that the
> result is fast(er than Magma).
The important thing is whether users can hope to get an answer at all.
The old Singular factoring code was hopeless and I bet multivariate gcd
is still hopeless. And what about correctness? The
Hi,
This is video from lecture one of my Sage course
http://video.google.com/videoplay?docid=-826792746508034&hl=en
--
William Stein
Associate Professor of Mathematics
University of Washington
http://wstein.org
On Monday 31 March 2008 07:38:48 pm William Stein wrote:
> On Mon, Mar 31, 2008 at 4:27 PM, Mike Hansen <[EMAIL PROTECTED]> wrote:
> > Hi Roman,
> >
> > It seems that for characteristic 0, Maxima is quite a bit faster than
> > Singular, but there's some overhead in talking to Maxima over the
>
On Apr 1, 2:25 am, "Yi Qiang" <[EMAIL PROTECTED]> wrote:
> Parallel dsage doctesting simply will not work at this point and I am
> not sure how to fix that yet. Is there a way to declare doctests so
> they do not get executed in parallel? I imagine this functionality
> might be useful for other
Hi Dan,
the dsage doctest failure you're seeing should not be happening with
the official 2.11 release. Can you
1) Verify that you are running the official 2.11 release
2) Provide sage/dsage/tests/testdoc.py from your source tree.
I have a feeling that at least the dsage code did not get upgraded
Parallel dsage doctesting simply will not work at this point and I am
not sure how to fix that yet. Is there a way to declare doctests so
they do not get executed in parallel? I imagine this functionality
might be useful for other features as well (if not now, in the
future).
Cheers,
Yi
http://y
On Mar 31, 2008, at 3:09 PM, William Stein wrote:
> On Mon, Mar 31, 2008 at 12:43 PM, John Cremona
> <[EMAIL PROTECTED]> wrote:
...
> By the way, I just did some cyclotomic polynomial benchmarking
> on an Intel Core 2 Duo 2.6GHz laptop. Here are the results:
>
> n    | Sage 2.11 | newcyc at
On Apr 1, 1:58 am, Dan Drake <[EMAIL PROTECTED]> wrote:
> Just as a data point, I built 2.11 on the Ubuntu Hardy beta with
> (almost) no troubles. I suppose this isn't anything too surprising,
> since I don't think much changed with the compilers and so on.
>
> I am seeing big problems with the
Just as a data point, I built 2.11 on the Ubuntu Hardy beta with
(almost) no troubles. I suppose this isn't anything too surprising,
since I don't think much changed with the compilers and so on.
I am seeing big problems with the parallel testing. A usual 'make test'
passes with all tests (except
Hello folks,
by now you are all familiar with the drill :)
Sage 2.11 is out and I already started merging tickets for Sage 3.0.
But there are plenty of unreviewed tickets sitting in trac waiting
patiently for somebody to take a look.
From the other perspective: If you have a patch sitting in trac
On Mon, Mar 31, 2008 at 4:27 PM, Mike Hansen <[EMAIL PROTECTED]> wrote:
>
> Hi Roman,
>
> It seems that for characteristic 0, Maxima is quite a bit faster than
> Singular, but there's some overhead in talking to Maxima over the
> pexpect interface. In any case, it is still _way_ slower than w
Hi Roman,
It seems that for characteristic 0, Maxima is quite a bit faster than
Singular, but there's some overhead in talking to Maxima over the
pexpect interface. In any case, it is still _way_ slower than what we
need. For example, Maxima takes around 10 seconds to do the bivariate
gcd of de
On Mar 31, 2008, at 3:09 PM, William Stein wrote:
> On Mon, Mar 31, 2008 at 12:43 PM, John Cremona
> <[EMAIL PROTECTED]> wrote:
> >
> > There was some discussion a week or so ago about a more efficient
> > implementation of cyclotomic polynomials, which led to trac#2654. I
> > have tried vari
Please excuse a (possibly naive) suggestion, but why not use Maxima
for multivariate gcds and factorization? I looked at the source code
and it appears to do Hensel lifting for both. That is the correct
algorithm that Sage appears to need. I'm not sure how to run it mod p
or over GF(p^q), but
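A minimal sketch of what Roman suggests, run from a Sage session over
the pexpect interface (the polynomials are only illustrative; this is
not code from the thread):

    # Inside Sage, `maxima` is the built-in pexpect interface object;
    # eval() sends a raw command string and returns Maxima's output.
    # Maxima's gcd() handles multivariate inputs directly.
    g = maxima.eval("gcd((x+y)^5*(x-y), (x+y)^3*(x+2*y))")
    print(g)  # expect (x+y)^3, up to Maxima's normal form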
On Mon, 31 Mar 2008, John Cremona wrote:
>
> There was some discussion a week or so ago about a more efficient
> implementation of cyclotomic polynomials, which led to trac#2654. I
> have tried various alternatives of the prototype code posted there,
> finding that the speed varied enormously a
On Mon, 31 Mar 2008, William Stein wrote:
>
> Hi John,
>
> Sure a special function to compute f(x^n) quickly seems like a good idea.
>
> By the way, I just did some cyclotomic polynomial benchmarking
> on an Intel Core 2 Duo 2.6GHz laptop. Here are the results:
>
>
> Relevant code:
>
>
> {{{i
On Mon, Mar 31, 2008 at 3:24 PM, David Harvey <[EMAIL PROTECTED]> wrote:
>
>
> On Mar 31, 2008, at 6:09 PM, William Stein wrote:
>
> > Relevant code:
>
> [snip]
>
> To profile this properly, you shouldn't do it just at powers of ten,
> since the running time will depend heavily on the factorisation pattern of n.
On Mar 31, 2008, at 3:43 PM, John Cremona wrote:
>
> There was some discussion a week or so ago about a more efficient
> implementation of cyclotomic polynomials, which led to trac#2654. I
> have tried various alternatives of the prototype code posted there,
> finding that the speed varied enormo
On Mar 31, 2008, at 6:09 PM, William Stein wrote:
> Relevant code:
[snip]
To profile this properly, you shouldn't do it just at powers of ten,
since the running time will depend heavily on the factorisation
pattern of n. I guess you should do some examples with lots of small
prime factors.
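A hedged sketch of picking benchmark values of n by factorisation
shape, along the lines David suggests (the helper and the specific
values are my own illustration, not from the thread):

    # Count distinct prime factors by trial division (fine for small n).
    def num_prime_factors(n):
        count, d = 0, 2
        while d * d <= n:
            if n % d == 0:
                count += 1
                while n % d == 0:
                    n //= d
            d += 1
        return count + (1 if n > 1 else 0)

    # Values of n with very different factorisation patterns.
    cases = {
        "prime":             10007,
        "prime power":       3**9,
        "two primes":        101 * 103,
        "many small primes": 2 * 3 * 5 * 7 * 11 * 13,
    }
    for name, n in cases.items():
        print(name, n, num_prime_factors(n))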
On Mon, Mar 31, 2008 at 12:43 PM, John Cremona <[EMAIL PROTECTED]>
wrote:
>
> There was some discussion a week or so ago about a more efficient
> implementation of cyclotomic polynomials, which led to trac#2654. I
> have tried various alternatives of the prototype code posted there,
> finding th
There was some discussion a week or so ago about a more efficient
implementation of cyclotomic polynomials, which led to trac#2654. I
have tried various alternatives of the prototype code posted there,
finding that the speed varied enormously as a result of
trivial-seeming changes.
The key operation is computing f(x^n) from f(x).
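In the dense case, the coefficient-spreading idea looks roughly like
this (a sketch assuming f is stored as a plain coefficient list; the
function name is hypothetical):

    def subst_power(coeffs, n):
        # f(x) -> f(x^n): the coefficient of x^i moves to position i*n.
        out = [0] * ((len(coeffs) - 1) * n + 1)
        for i, c in enumerate(coeffs):
            out[i * n] = c
        return out

    # f(x) = 1 + 2x + 3x^2 gives f(x^3) = 1 + 2x^3 + 3x^6:
    assert subst_power([1, 2, 3], 3) == [1, 0, 0, 2, 0, 0, 3]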
On Mar 29, 2008, at 17:13, Justin C. Walker wrote:
>
> On Mar 28, 2008, at 19:11 , mabshoff wrote:
>>
>> Hello folks,
>>
>> this is 2.11.alpha2. It is a little later than I had hoped and
>> planned, mostly because Easter and Spring break
>> put somewhat of a damper on development. So
On Mon, Mar 31, 2008 at 11:08 AM, Timothy G Abbott <[EMAIL PROTECTED]> wrote:
> Okay, I've created a patch to do this.
>
> However, it results in a number of doctest failures. They fall into two
> classes:
>
> 1) a bunch of errors, mostly in tut.py and the plotting code, that try to
> write to the working directory from code like "a.save('a')".
Okay, I've created a patch to do this.
However, it results in a number of doctest failures. They fall into two
classes:
1) a bunch of errors, mostly in tut.py and the plotting code, that try to
write to the working directory from code like "a.save('a')".
2) Places where the doctests only work
On Mar 31, 7:30 pm, David Harvey <[EMAIL PROTECTED]> wrote:
> Begin forwarded message:
>
>
>
> > From: Genya Zaytman <[EMAIL PROTECTED]>
> > Date: March 31, 2008 1:15:21 PM EDT
> > To: David Harvey <[EMAIL PROTECTED]>
> > Subject: Fwd: Multivariate Polynomial Factoring is Broken
>
> > Begin forw
Thanks, good to know it's already a known/worked out issue.
On Mar 31, 12:07 pm, mabshoff <[EMAIL PROTECTED]
dortmund.de> wrote:
> On Mar 31, 5:53 pm, Walt <[EMAIL PROTECTED]> wrote:
>
> > Hi,
>
> > I got an error when building sage 2.11. Using gcc-4.3.0. Here's the
> > output:
>
>
>
> > Not s
Begin forwarded message:
> From: Genya Zaytman <[EMAIL PROTECTED]>
> Date: March 31, 2008 1:15:21 PM EDT
> To: David Harvey <[EMAIL PROTECTED]>
> Subject: Fwd: Multivariate Polynomial Factoring is Broken
>
>
> Begin forwarded message:
>> From: Genya Zaytman <[EMAIL PROTECTED]>
>> Date: March 31, 2
Dear William,
On Mar 31, 5:05 pm, "William Stein" <[EMAIL PROTECTED]> wrote:
> I'm not a computer scientist either, but I was one as an undergrad, and
> I learned in Computer Science 101
So you are definitely more of a computer scientist than I.
I never attended a computer science class. It is (or used t
On Mar 31, 5:05 pm, "William Stein" <[EMAIL PROTECTED]> wrote:
> then it is impossible to have better complexity
> than linear in n, ...
That's true, but it is still possible to sum faster than linear time,
using closed-form sum formulas (Gosper's algorithm, Zeilberger's
algorithm, ...).
So, the examples
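As a concrete toy instance of a closed-form sum (my illustration; the
real Gosper/Zeilberger machinery handles far more general
hypergeometric sums):

    # sum(range(n)) has the closed form n*(n-1)//2, which is O(1)
    # arithmetic instead of n-1 additions.
    def range_sum(n):
        return n * (n - 1) // 2

    assert range_sum(10**6) == sum(range(10**6))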
On Mar 31, 5:53 pm, Walt <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I got an error when building sage 2.11. Using gcc-4.3.0. Here's the
> output:
> Not sure how to do what it says to do, since I haven't actually used
> sage yet; any help is appreciated. Also, if you need more info just
> ask, I w
Hi,
I got an error when building sage 2.11. Using gcc-4.3.0. Here's the
output:
mpn_extras.o:mpn_extras.c:(.text+0x25f0): first defined here
long_extras.o: In function `__gmpz_abs':
long_extras.c:(.text+0x4070): multiple definition of `__gmpz_abs'
mpn_extras.o:mpn_extras.c:(.text+0x2630): firs
I was also totally confused by the idea of making sum faster than
linear, but I think I know where it came from. If you have infinitely
many (or at least n/2) processors, you can do n/2 additions "in one
step." Then repeat it with the results, and this gives you log_2(n)
steps, but on any particular machine
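Simulated sequentially, the log_2(n) rounds look like this (a sketch;
each pass below would be a single parallel step if every pair had its
own processor):

    def pairwise_sum(xs):
        # Sum adjacent pairs until one value remains; with n/2
        # processors each pass costs one step, so there are
        # ceil(log2(n)) passes in total.
        xs = list(xs)
        passes = 0
        while len(xs) > 1:
            pairs = [xs[i] + xs[i + 1] for i in range(0, len(xs) - 1, 2)]
            if len(xs) % 2:        # an odd element carries over
                pairs.append(xs[-1])
            xs = pairs
            passes += 1
        return xs[0], passes

    print(pairwise_sum(range(10)))  # (45, 4), and ceil(log2(10)) = 4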
On Mon, Mar 31, 2008 at 1:04 AM, Simon King <[EMAIL PROTECTED]> wrote:
>
> Dear Sage team,
>
> i made a few timings with the "sum" function:
>
> sage: L2=range(100)
> sage: L3=range(1000)
> sage: L4=range(10000)
> sage: L5=range(100000)
> sage: L6=range(1000000)
> sage: timeit('a=sum(L2,0)
On Mar 31, 1:53 pm, offtsing <[EMAIL PROTECTED]> wrote:
Hi,
> I'll try FLINT. And I have the Intel compiler 9.0 too; does it fit the C99
> standard?
technically it does, though I am fairly certain that it won't work as
is with the way Sage is currently set up. So far nobody has attempted
to use the
I'll try FLINT. And I have the Intel compiler 9.0 too; does it fit the C99
standard?
Thank you for your advice.
On Mar 31, 9:28 am, mabshoff <[EMAIL PROTECTED]
dortmund.de> wrote:
> On Mar 31, 3:10 am, guez offtsing <[EMAIL PROTECTED]> wrote:
>
> > I use Suse enterprise server 9 sp2-ia64, below is the
I put my patch up at http://trac.sagemath.org/sage_trac/ticket/2737 ,
but there is major code duplication with the product version.
--Mike
On Mon, Mar 31, 2008 at 4:40 AM, Joel B. Mohler <[EMAIL PROTECTED]> wrote:
>
> On Monday 31 March 2008 07:04:18 am Mike Hansen wrote:
> > Here are the timi
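One hedged way to factor out the duplication Mike mentions (the names
are hypothetical, not the actual code at #2737): a single balanced fold
that both the sum and the product call.

    import operator

    def balanced_fold(op, L, identity):
        # Fold L with op over a balanced tree, so the sum and product
        # versions share one recursion instead of duplicating it.
        n = len(L)
        if n == 0:
            return identity
        if n == 1:
            return L[0]
        mid = n // 2
        return op(balanced_fold(op, L[:mid], identity),
                  balanced_fold(op, L[mid:], identity))

    def balanced_sum(L, zero=0):
        return balanced_fold(operator.add, L, zero)

    def balanced_prod(L, one=1):
        return balanced_fold(operator.mul, L, one)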
On Monday 31 March 2008, Simon King wrote:
>
> So, this is much faster than sum, but it seems to me that it still
> grows about linearly.
> I remember having heard somewhere that summation of a list can be done
> in logarithmic time.
> But I am absolutely not sure if this is really true, and likely I am mistaken.
> So, this is much faster than sum, but it seems to me that it still
> grows about linearly.
> I remember having heard somewhere that summation of a list can be done
> in logarithmic time.
> But I am absolutely not sure if this is really true, and likely I am
> mistaken.
> Hence, sorry for
On Monday 31 March 2008 07:04:18 am Mike Hansen wrote:
> Here are the timings I get by pretty much just copying balanced_list_prod.
About a month ago, I mailed sage-devel with a related issue:
sage: N=1000
sage: R.<x> = QQ[]
sage: L2=[x^i for i in range(N)]
sage: sum(L2)
...
The above sum behaves quadratically in N.
Dear Mike,
Thank you for your hint!
On Mar 31, 1:04 pm, "Mike Hansen" <[EMAIL PROTECTED]> wrote:
> Here are the timings I get by pretty much just copying balanced_list_prod.
>
> sage: timeit('a=balanced_sum(L2,0)')
> 625 loops, best of 3: 12.6 µs per loop
>
> sage: timeit('a=balanced_sum(L3,0)'
However, if I do the following, the timings look the same.
sage: L6=srange(1000000)
sage: timeit('a=sum(L6,0)')
5 loops, best of 3: 179 ms per loop
sage: timeit('a=balanced_sum(L6,0)')
5 loops, best of 3: 202 ms per loop
--Mike
On Mon, Mar 31, 2008 at 4:04 AM, Mike Hansen <[EM
Here are the timings I get by pretty much just copying balanced_list_prod.
sage: L2=range(100)
sage: L3=range(1000)
sage: L4=range(10000)
sage: L5=range(100000)
sage: L6=range(1000000)
sage: timeit('a=balanced_sum(L2,0)')
625 loops, best of 3: 12.6 µs per loop
sage: timeit('a=sum(L2,0)')
625 loo
On Tue, Mar 25, 2008 at 5:02 PM, William Stein <[EMAIL PROTECTED]> wrote:
>
> On Tue, Mar 25, 2008 at 7:28 AM, Ondrej Certik <[EMAIL PROTECTED]> wrote:
> >
> > On Sun, Mar 23, 2008 at 8:37 PM, William Stein <[EMAIL PROTECTED]> wrote:
> > >
> > > Hi,
> > >
> > > If you're a student an
Hi Simon,
sum() is a builtin Python function. There has been passing discussion
of making an asymptotically fast sum function, but I don't think that
anyone has gotten around to it. There is a balanced product which you
can find
http://www.sagemath.org/hg/sage-main/file/cc1e12a492fc/sage/misc/
Splitting it like this still yields a linear algorithm. If f(n) is
the time to add a list of length n, then you have
f(n) = 2*f(n/2) + O(1), so f(n) is linear.
David
On Mon, Mar 31, 2008 at 1:04 AM, Simon King <[EMAIL PROTECTED]> wrote:
>
> Dear Sage team,
>
> i made a few timings with the "sum" func
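To make David's point concrete, and to show where a balanced tree does
pay off, here is a toy cost model (my own illustration, assuming that
adding two sparse polynomials with t1 and t2 terms costs t1 + t2 term
operations and that all monomials are distinct). With constant-cost
additions both orders are linear, as David says; with size-dependent
additions, summing N monomials left-to-right costs about N^2/2 term
operations, while the balanced tree costs about N*log2(N).

    def naive_ops(term_counts):
        # Left-to-right: the accumulator keeps growing.
        ops, acc = 0, 0
        for t in term_counts:
            ops += acc + t
            acc += t
        return ops

    def balanced_ops(term_counts):
        # Balanced tree: each level touches every term once.
        def rec(lo, hi):
            if hi - lo == 1:
                return 0, term_counts[lo]
            mid = (lo + hi) // 2
            o1, t1 = rec(lo, mid)
            o2, t2 = rec(mid, hi)
            return o1 + o2 + t1 + t2, t1 + t2
        return rec(0, len(term_counts))[0]

    N = 1000
    print(naive_ops([1] * N), balanced_ops([1] * N))
    # 500500 versus roughly N*log2(N)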
Dear Sage team,
I made a few timings with the "sum" function:
sage: L2=range(100)
sage: L3=range(1000)
sage: L4=range(10000)
sage: L5=range(100000)
sage: L6=range(1000000)
sage: timeit('a=sum(L2,0)')
625 loops, best of 3: 299 µs per loop
sage: timeit('a=sum(L3,0)')
125 loops, best of 3: 1.35 ms per loop