On Thu, Oct 22, 2009 at 9:54 AM, Robert Bradshaw
wrote:
> This is a known limitation (due to a workaround for a fixed MPFR bug.)
> See http://trac.sagemath.org/sage_trac/ticket/2567 . Should be an easy
> fix.
I've posted a patch for this.
--Mike
Hi Kiran,
On Thu, Mar 4, 2010 at 3:12 PM, Minh Nguyen wrote:
> See ticket #8432 for an implementation of this
> approach.
From your install log, I see that your build also has gotten past
installing iconv. This makes me doubt that the solution at #8432 would
solve the build problem you report.
Hi Kiran,
On Thu, Mar 4, 2010 at 2:36 PM, Kiran Kedlaya wrote:
> As expected based on my experience with 4.3.3, I got a build error
> building 4.3.4.alpha0, though this time it was a linking error with gd
> rather than cddlib. Again, this is Fedora 10 on a 64-bit system, but
> on a 32-bit network
The coercion question in the first post in this thread is still open.
--
To post to this group, send an email to sage-devel@googlegroups.com
To unsubscribe from this group, send an email to
sage-devel+unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/gro
Problem solved. The list of zeros was mis-formatted (by myself).
Thanks Leif!
Here is the Pythonized version, for reference.
I made the table of zeros incorrectly. I used the code:

print "float zz[]={"
for i in range(1,6000):  # len(zz)
    s = str(zz[i]) + ","
    i += 1   # no effect: the for loop reassigns i each iteration, so every zero after the first was emitted twice
    s += str(zz[i]) + ","
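A corrected, self-contained sketch of what the table generator presumably intended, two zeros per output line (the sample `zz` values here stand in for the real table of zeros, which isn't shown in the thread):

```python
# Emit a C float array from a Python list of zeros, two entries per line.
zz = [14.1347, 21.0220, 25.0109, 30.4249]  # sample values; the real table has ~6000 entries

lines = ["float zz[]={"]
for i in range(0, len(zz), 2):          # step by 2 so no entry is repeated
    pair = ",".join(str(z) for z in zz[i:i+2])
    lines.append("  " + pair + ",")
lines.append("};")
print("\n".join(lines))
```

Stepping the loop index by 2 (instead of incrementing it inside the loop body, which a `for` loop silently undoes) is what prevents each zero from appearing twice.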
I just tried building MPIR 1.3.1 on /tmp -- a local file system --
instead of the network file system I was using before. Suddenly, it
works!
I assume the next Sage release will include this MPIR revision. Sage
4.3.3 (MPIR 1.2.2) fails to build yasm as before.
Thanks!
- Ryan
Hi,
I do not know whether this message belongs here or on sage-marketing or
sage-edu... it is rather technical.
I just opened a Sage wiki and a discussion list in French after the
education day we had during Sage Days 20 in Marseille (http://
groups.google.com/group/sage-support/browse_thread/
> Oddly, this worked for me. But... the (rather cryptic) answer should
> give you an idea of what's going on:
Oh, that's my mistake -- I didn't include the line 'from math import log'
> > That sounds exciting, are there also plans to implement "discrete"
> > fractals? (combinat.WordMorphisms and word-paths and things like
> > that?)
>
> >http://www.sagemath.org/doc/reference/sage/combinat/words/paths.html
> >http://alexis.monnerot-dumaine.neuf.fr/articles/fibonacci%20fractal.pdf
Oops, I forgot to include the polynomial:
sage: x = polygen(QQ)
sage: f = x^3 - 3024*x + 46224
Perhaps the problem is only when the roots are all real?
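For what it's worth, the all-real guess can be checked from the discriminant of the depressed cubic (a quick stdlib-only check, outside Sage):

```python
# For x^3 + p*x + q, a positive discriminant -4p^3 - 27q^2
# means three distinct real roots.
p, q = -3024, 46224   # from f = x^3 - 3024*x + 46224
disc = -4 * p**3 - 27 * q**2
print(disc > 0)   # True: all three roots of f are real
```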
On Wed, Mar 3, 2010 at 12:59 PM, John Cremona wrote:
> What is your complete code: I had no trouble with
>
> sage: x=polygen(QQ)
> sage: f=x^2
On Mar 3, 10:17 pm, Jonathan Bober wrote:
> The first depends mostly on
> hard drive speed ...
To be exact, it also depends on the filesystem :) I've got ext4...
One small idea: to clear the disk cache on Linux, do
sync ; sudo sh -c "echo 3 | tee /proc/sys/vm/drop_caches"
http://boxen.math.wash
If you need some more examples, with buffer/cache cleared.
tomahawk-~ $ time mupkern input
**MuPAD Pro 4.5.0 -- The Open Computer Algebra System
** Copyright (c) 1997 - 2007 by SciFace Software
** All rights reserved.
Running these tests gives some information, but it is probably a little
hard to interpret. On a fresh boot, sage will take roughly 18 seconds to
start up on my machine. Subsequent runs, however, take roughly 1.8
seconds, typically.
This is all dependent on many things that the operating system doe
What is your complete code: I had no trouble with
sage: x=polygen(QQ)
sage: f=x^2+1
sage: R.<a> = NumberField(f,'a')
sage: a.n()
1.00*I
but f=x^2-2 gave the problem.
John
On 3 March 2010 18:32, Tom Boothby wrote:
> I found a distressing bug just now:
>
> sage: R.<a> = NumberField(f,'a')
On 03/03/2010 10:50 AM, kstueve wrote:
> #include
> #include
What happens if you insert
#include <stdlib.h>
here?
Compiling with extra warning flags *may* help:
$ gcc -o fastli fastli.c -lm -O2 -W -Wall
fastli.c: In function ‘main’:
fastli.c:29: warning: implicit declaration of function ‘atof’
fastli.
On Wed, Mar 3, 2010 at 11:29 AM, Harald Schilly
wrote:
> On Mar 3, 6:16 pm, "Dr. David Kirkby" wrote:
>> time echo "2+2;" | /absolute/path/to/sage
>>
>
> For me:
> real 0m24.730s
> user 0m6.712s
> sys 0m1.884s
>
> real 0m5.127s
> user 0m4.348s
> sys 0m0.784s
>
> real 0m5.24
On Mar 3, 6:16 pm, "Dr. David Kirkby" wrote:
> time echo "2+2;" | /absolute/path/to/sage
>
For me:
real    0m24.730s
user    0m6.712s
sys     0m1.884s

real    0m5.127s
user    0m4.348s
sys     0m0.784s

real    0m5.245s
user    0m4.560s
sys     0m0.840s
It's a N270 atom netbook, my hdd is
$ su
> li(10**10,39.0)
> -101.969133925197
> li(10**10,39)
> li: unable to attain the desired precision
Oddly, this worked for me. But... the (rather cryptic) answer should
give you an idea of what's going on:
624/1217*sin(39*log(100))/log(100) +
8/1217*cos(39*log(100)
I've been working more on TOS's Li based pi(x) approximation code.
I've been trying to optimize it in c. It seems that I need someone
more knowledgeable than myself in c to point out some simple mistake I
am making that is preventing the code from giving the correct answer.
I tried copying and pas
Hi folks,
On Thu, Mar 4, 2010 at 5:05 AM, Minh Nguyen wrote:
> * The following tests failed on sage.math:
They also fail on bsd.math and rosemary.math. This issue is now
tracked at ticket #8430:
http://trac.sagemath.org/sage_trac/ticket/8430
--
Regards
Minh Van Nguyen
I found a distressing bug just now:
sage: R.<a> = NumberField(f,'a')
sage: a.n()
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/tmp8r5Xa6/___code___.py", line 3, in <module>
    exec compile(ur'a.n()' + '\n', '', 'single')
  File "<string>", line 1, in <module>
  File "element.pyx", line 4
> 1) Sun Blade 2000, circa 2000
> 2 x 900 MHz UltraSPARC III+ CPUs
> Load average 1 (Sorry, I'm doing something and can't stop that)
> 1x 147 GB Seagate SEAGATE-ST3146807FC. 15,000 rpm SCSI with a 2 Gbit/s fibre
> channel interface.
> Sage version 4.3.3 with patches for Solaris as documented
Hi folks,
This release incorporates many combinatorics tickets positively
reviewed during and/or before Sage Days 20.
Source tarball:
http://sage.math.washington.edu/home/release/sage-4.3.4.alpha0/sage-4.3.4.alpha0.tar
Binary for sage.math:
http://sage.math.washington.edu/home/release/sage-4.3
2) Macbook Pro, circa 2009
2.66GHz Intel Core 2 Duo
Fairly loaded (Activity Monitor reports about 70% idle CPU, 635MB/4GB free
memory)
But the conclusion seems fairly clear.
Median time of 5 runs:
real    0m2.390s
Maximum of the 5 runs:
real    0m22.064s
The maximum was first, and afterward all
On Wed, Mar 3, 2010 at 9:12 AM, Florent Hivert
wrote:
> Hi William,
>
> On Wed, Mar 03, 2010 at 05:48:28AM -0800, William Stein wrote:
>> On Tue, Mar 2, 2010 at 7:56 PM, Dr. David Kirkby
>> wrote:
>> >> Right now it takes over 1.5 seconds every time.
>> >> wst...@sage:~$ time sage -c "pri
there are interactions with lapack and atlas in CVXOPT, for instance.
On Mar 3, 2:35 pm, Jason Grout wrote:
> I couldn't find any good spline routines in Sage for constructing simple
> splines with given boundary conditions (are there any? There are some
> spline routines in scipy, but not what
On 3 March 2010 17:03, Harald Schilly wrote:
> On Mar 3, 5:05 pm, William Stein wrote:
>> A couple of days ago, I put the following on my website, since I get a
>> really *huge* amount of off-list email directed at me about Sage.
>
> I've also put that up here: http://sagemath.org/contact.html
Y
Ticket http://trac.sagemath.org/sage_trac/ticket/8254
"sage takes way too long to startup"
seems to irritate a lot of people. It does not bother me too much, but I feel
one way to at least start to tackle this is probably to get some quantifiable
data and see where the time is being spent. My hunch i
Hi William,
On Wed, Mar 03, 2010 at 05:48:28AM -0800, William Stein wrote:
> On Tue, Mar 2, 2010 at 7:56 PM, Dr. David Kirkby
> wrote:
> >> Right now it takes over 1.5 seconds every time.
> >> wst...@sage:~$ time sage -c "print factor(2010)"
> >> 2 * 3 * 5 * 67
> >> real 0m1.535s
> >>
On Mar 3, 5:05 pm, William Stein wrote:
> A couple of days ago, I put the following on my website, since I get a
> really *huge* amount of off-list email directed at me about Sage.
I've also put that up here: http://sagemath.org/contact.html
H
I was able to realize the desired result by modifying some lines in
sphinx/highlighting.py:
try:
    if self.dest == 'html':
        # Add '>>> ' and '... ' as appropriate if really want
        # interactive mode.
        source_copy = ''
        line_count = 0
        for line in s
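The transformation being patched in can be sketched as a plain function (a simplified illustration, not the actual sphinx/highlighting.py code):

```python
def to_console_style(source):
    """Prefix a code block with '>>> ' / '... ' so it reads like an
    interactive session: new statements get '>>> ', indented or blank
    continuation lines get '... '."""
    out = []
    for line in source.splitlines():
        if out and (line.startswith((" ", "\t")) or line == ""):
            out.append("... " + line)   # continuation line
        else:
            out.append(">>> " + line)   # new top-level statement
    return "\n".join(out)
```

For example, `to_console_style("for i in range(2):\n    print(i)")` yields the two-line console-style block with `>>> ` on the first line and `... ` on the indented one.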
Martin Rubey wrote:
Personally I have a bit of a problem understanding why I need to
worry about a program starting up in less than 2 s, when I might run
something on it which will take at least one order of magnitude
longer, and probably several order of magnitudes longer.
I can only say why i
> Personally I have a bit of a problem understanding why I need to
> worry about a program starting up in less than 2 s, when I might run
> something on it which will take at least one order of magnitude
> longer, and probably several order of magnitudes longer.
I can only say why it matters for
Hi,
A couple of days ago, I put the following on my website, since I get a
really *huge* amount of off-list email directed at me about Sage.
Since I'm going to stick to it, and it's relevant to many people on
these lists I'm posting this here, so people will know.
"WARNING: If you send me an unso
mhampton wrote:
There has been some previous discussion about this on sage-devel, I
can't find exactly the thread I remember but here's a somewhat related
one:
http://groups.google.com/group/sage-devel/browse_thread/thread/b91c51672ae0f475/
Thank you.
Personally I think it makes sense to put
William Stein wrote:
On Tue, Mar 2, 2010 at 7:56 PM, Dr. David Kirkby
wrote:
Right now it takes over 1.5 seconds every time.
wst...@sage:~$ time sage -c "print factor(2010)"
2 * 3 * 5 * 67
real    0m1.535s
user    0m1.140s
sys     0m0.460s
Personally I don't find that too excessive for a l
On 03/03/2010 05:48 AM, William Stein wrote:
> Pari 0.030s
> Python 0.046s
> Maple 0.111s
> Maxima 0.456s
> Mathematica 0.524s
> Matlab 0.844s
> Magma 0.971s
> Sage 1.658s
>
> This is probably the only benchmark that involves a "functio
2010/3/3 William Stein :
> On Tue, Mar 2, 2010 at 7:56 PM, Dr. David Kirkby
> wrote:
>>> Right now it takes over 1.5 seconds every time.
>>> wst...@sage:~$ time sage -c "print factor(2010)"
>>> 2 * 3 * 5 * 67
>>> real 0m1.535s
>>> user 0m1.140s
>>> sys 0m0.460s
>>
>> Personally I don
There are two test suites with validated results at
http://axiom-developer.org/axiom-website/CATS/
The CATS (Computer Algebra Test Suite) effort targets
the development of known-good answers that get run
against several systems. These "end result" suites test
large portions of the system. As they
On Tue, Mar 2, 2010 at 7:56 PM, Dr. David Kirkby
wrote:
>> Right now it takes over 1.5 seconds every time.
>> wst...@sage:~$ time sage -c "print factor(2010)"
>> 2 * 3 * 5 * 67
>> real 0m1.535s
>> user 0m1.140s
>> sys 0m0.460s
>
> Personally I don't find that too excessive for a larg
Joshua Herman wrote:
Is there a mathematica test suite we could adapt or a standardized set
of tests we could use? Maybe we could take the 100 most often used
functions and make a test suite?
I'm not aware of one. A Google search found very little of any real use.
I'm sure Wolfram Research have such
There has been some previous discussion about this on sage-devel, I
can't find exactly the thread I remember but here's a somewhat related
one:
http://groups.google.com/group/sage-devel/browse_thread/thread/b91c51672ae0f475/
Personally I think it makes sense to put the most effort into getting
sa
Hi William,
> I recall many years ago programming the 80387 maths coprocessor chip at the
> assembly level to generate the fastest Mandelbrot set I could. If I recall
> correctly, it ran at 25 MHz, which I think was the fastest any 80386/80376
> chip ran at.
Wow! Advanced technology! I