[sage-devel] compiling sage_c_lib-2.1.4

2007-02-27 Thread Carl Hansen

compiling /sage_c_lib-2.1.4

the problem:
In file included from 
/opt2/local/sage-2.1.4/local/include/python2.5/Python.h:8,
from src/stdsage.h:35,
from src/interrupt.c:13:
/opt2/local/sage-2.1.4/local/include/python2.5/pyconfig.h:917:1: 
warning: "_FILE_OFFSET_BITS" redefined
In file included from /usr/include/stdio.h:22,
from src/interrupt.c:12:
/opt2/local/encap/gcc-4.1.2/bin/../lib/gcc/sparc-sun-solaris2.10/4.1.2/include/sys/feature_tests.h:197:1:
warning: this is the location of the previous definition

the reason:
in feature_tests.h:
/* In the 32-bit environment, the default value is 32; if not set, set it to
 * the default here, to simplify tests in other headers. */
#ifndef _FILE_OFFSET_BITS
#define _FILE_OFFSET_BITS   32
#endif
in pyconfig.h:
#define _FILE_OFFSET_BITS 64

the "solution":
in src/interrupt.c
change
#include <stdio.h>
#include "stdsage.h"
#include "interrupt.h"
to
#include "stdsage.h"
#include "interrupt.h"
#include <stdio.h>




problem 2

src/interrupt.h:147: error: expected specifier-qualifier-list before 
'typedef'
the solution:
on line 93
struct sage_signals {
   [.]
#elif defined (__sun__) || defined (__sun)   /* Solaris */
typedef void (*__sighandler_t )();
[...]
   }  

change to

typedef void (*__sighandler_t )();
struct sage_signals {
   [.]
#elif defined (__sun__) || defined (__sun)   /* Solaris */
[...]
   }
   In other words, move the typedef out of the struct definition.

   Then it compiles. Whether it's RIGHT or not, who knows.

   Solaris 10 gcc 4.1.2


--~--~-~--~~~---~--~~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://sage.scipy.org/sage/ and http://modular.math.washington.edu/sage/
-~--~~~~--~~--~--~---



[sage-devel] Re: compiling sage_c_lib-2.1.4

2007-02-27 Thread William Stein

Thanks.  I've put these in sage-2.2.

On 2/27/07, Carl Hansen <[EMAIL PROTECTED]> wrote:
> [Carl's report, quoted in full above; snipped]


-- 
William Stein
Associate Professor of Mathematics
University of Washington




[sage-devel] Re: quad double timings and accuracy

2007-02-27 Thread William Stein

On Monday 26 February 2007 7:18 pm, Robert Bradshaw wrote:
> Shouldn't the error on a quad double be way smaller than this? I'm
> not sure what specific numbers you're operating on, but if your
> answers are on the order of 10^0, then shouldn't you have around 63
> decimal digits of accuracy, rather than just 4 more orders of
> magnitude? Wouldn't an error of 1e-17 be like using mpfr with ~60+ bits?

Yeah, I really don't get it.  Quad double should give results correct
to (nearly) 212 bits, or what is the point of using quad double at all?
Something really funny is going on. 

> I guess what I'd like to see to understand this better is the
> absolute magnitude of cos(1) between rdf, qd, mpfr(212), and mpfr(1000).
>
> On Feb 26, 2007, at 7:07 PM, didier deshommes wrote:
> > How accurate are these results? The error is quite small and more
> > accurate than computing with ieee doubles (most of the time, about 4
> > orders of magnitude). Here:
> > -- "mpfr vs qd " is the absolute error between a quad double and mpfr
> > real, and
> > -- "mpfr vs rd"  is the absolute error in between a real double and
> > mpfr real:
> >
> > cos:
> > mpfr vs qd: 5.4180459105735642433E-17
> > mpfr vs rd: 3.57935903139e-13
> >
> > sin:
> > mpfr vs qd : 4.9262450620608075647E-17
> > mpfr vs rd : 4.22384349719e-13
> >
> > tan:
> > mpfr vs qd : 1.0996009735470526760E-16
> > mpfr vs rd : 1.37401201528e-12
> >
> > acos:
> > mpfr vs qd : 1.0587913940429450042E-16
> > mpfr vs rd : 1.95518601309e-12
> >
> > asin:
> > mpfr vs qd : 8.8793698896573320837E-17
> > mpfr vs rd : 1.95532479097e-12
> >
> > atan:
> > mpfr vs qd : 4.2348407244178416828E-17
> > mpfr vs rd : 4.09228206877e-13
> >
> > cosh:
> > mpfr vs qd : 1.1001972366209892607E-16
> > mpfr vs rd : 4.91606755304e-13
> >
> > sinh:
> > mpfr vs qd : 7.7307263905133232438E-17
> > mpfr vs rd : 6.54809539924e-13
> >
> > tanh:
> > mpfr vs qd : 5.0901691104837936913E-17
> > mpfr vs rd : 4.08617584213e-13
> >
> > cosh:
> > mpfr vs qd NAN
> > mpfr vs rd nan
> >
> > sinh:
> > mpfr vs qd : 5.0731042379144584142E-17
> > mpfr vs rd : 4.23105994685e-13
> >
> > tanh:
> > mpfr vs qd : 1.9007614867237325552E-16
> > mpfr vs rd : 8.84181616811e-12
> > ##
> >
> > In conclusion:
> > In most cases it is faster to compute with quad double reals instead
> > of using mpfr reals at 212 bits. In all cases quad doubles are more
> > accurate than simple ieee doubles.
> >
> > didier
>
> 
-- 
William Stein
Associate Professor of Mathematics
University of Washington




[sage-devel] Re: sage-2.2

2007-02-27 Thread Jaap Spies

William Stein wrote:

> make the official release of 2.2.  I'll hopefully make an alpha release
> sometime tonight, which people can build and test out.

On FC 5:

--
All tests passed!
Total time for all tests: 911.3 seconds
[EMAIL PROTECTED] sage-2.2.alpha]$


Jaap





[sage-devel] Re: sage-2.2

2007-02-27 Thread William Stein

On Tuesday 27 February 2007 7:19 am, Jaap Spies wrote:
> William Stein wrote:
> > make the official release of 2.2.  I'll hopefully make an alpha release
> > sometime tonight, which people can build and test out.
>
> On FC 5:
>
> --
> All tests passed!
> Total time for all tests: 911.3 seconds

Thanks.  Unfortunately, I haven't actually made an alpha release of 2.2. 
The sage-2.2.alpha I had in my home directory (under tmp) was from two weeks
ago, when I was considering making one and changed my mind.  I'll post
an announcement when I make an alpha release. 

William




[sage-devel] Re: quad double timings and accuracy

2007-02-27 Thread didier deshommes

On 2/26/07, Robert Bradshaw <[EMAIL PROTECTED]> wrote:
>
> Shouldn't the error on a quad double be way smaller than this? I'm
> not sure what specific numbers you're operating on, but if your
> answers are on the order of 10^0, then shouldn't you have around 63
> decimal digits of accuracy, rather than just 4 more orders of
> magnitude? Wouldn't an error of 1e-17 be like using mpfr with ~60+ bits?

I must have somehow miscompiled the library (again). I will look at it
more closely later.

didier




[sage-devel] Multivariate polynomial benchmark

2007-02-27 Thread mabshoff

Hello,

I exchanged a couple of emails with Martin Albrecht over the last
couple of days about the benchmarks he did comparing multivariate
polynomial arithmetic in Singular and Magma. I then ran some of the
benchmarks with CoCoALib 0.97CVS and had some suggestions on how
to do things differently. After searching for a set of benchmarks I
came up pretty much empty-handed.

I could only find one recent paper: FRPOLY: A Benchmark Revisited (see
http://www.brics.dk/~hosc/local/LaSC-4-2-pp155-164.pdf ) - but I
didn't look for papers very long either and I currently do not have
access to my library. Any pointers would be appreciated.

The lisp code referred to in FRPOLY can be found at
http://www.koders.com/lisp/fid5977A8A29DAE1A62638CE7BEFCE391E4C2CCF2C3.aspx

I would suggest something along the following lines for the MVPoly
benchmark:

Have a matrix of test cases:

*number of indeterminates:
 - small (3 indeterminates)
 - medium (10 indeterminates)
 - large (25 indeterminates)
*length of polynomials
 - small (3 monomials)
 - medium (10 monomials)
 - large (25 monomials)
* Ring/Algebra
 - Z (not very interesting - at least to me :))
 - Z2 (special case for certain fields like crypto research - I do
care about that)
 - Zp small, i.e. p=32003
 - Zp huge  i.e. p=something prime beyond 2^32 (not everybody has that
implemented, at least not efficiently)
 - Q small
 - Q large
 - Weyl Algebras, non-commutative Algebras in general - rather exotic,
not present everywhere
* Term Ordering - not sure about the impact of that - this might
disappear:
 - DegRevLex
 - DegLex
 - Lex
 - Ordering Matrix

Depending on how many options you select and after computing the
cartesian product of all selected options you end up with lots of
tests. To graph the result I really like what was done for FLINT by
plotting the cases in a plane with colored circles of different size
signifying who was faster by what factor.

Operations:
 - addition
 - multiply
 - power small
 - power huge
 - GCD

Addition should be pretty boring, and small powers probably are, too. Can
anybody else suggest more operations?

A couple remarks:
* I would measure time *and* memory.
* Instead of a certain number of repetitions for each benchmark I
would run for a number of seconds and count the number of operations
completed in that timeframe. The time allotted would depend on how
difficult a certain task is, e.g. multiplying with small coefficients
is obviously much cheaper than with large coefficients in Q. The
total runtime of a given benchmark would be the product of the weights
assigned to each characteristic, e.g. 25 indeterminates would give a
factor of 5, 25 monomials a factor of 3, and Q small a factor of 1.5, so
5*3*1.5=22.5 times longer than the baseline. That way the performance
of the more difficult operations would not vary as much statistically,
and the total time for the benchmark run could be computed ahead of
time. That way you know how long a break you can take :)
* Use something like a 1GHz P3 as a baseline. If you feel like running
a benchmark on your SPARCstation 20 in the basement you will not get
any useful result, but hopefully the benchmark will not lose its value
in a couple of years and the results will remain comparable, unlike, say,
SPEC. You can obviously always dial up the length of the polynomials
as well as the number of indeterminates.

DISCLAIMER: I am a developer of CoCoALib, so I would like to get
input from the developers of Singular, Magma, and any other project
interested in participating, in order to avoid favoring my own system.

Cheers,

Michael

PS: We will be releasing CoCoALib 0.98 in roughly 4 to 5 days, so if I
don't reply quickly I am probably doing last-minute fixes.





[sage-devel] sage-2.2-alpha

2007-02-27 Thread William Stein

Hi,

I've put sage-2.2.alpha3 here:

   http://sage.math.washington.edu/home/was/pkgs/

Any build feedback will be appreciated (except on sage.math -- I'm 
already building there...)

-- 
William Stein
Associate Professor of Mathematics
University of Washington




[sage-devel] Re: NetworkX Development

2007-02-27 Thread Robert Miller

Aric,

I've been working in a few different directions lately:
1- I'm working on a C implementation of the base class structure,
which will vastly speed up many algorithms. I'm not sure whether you
want to keep everything in Python or not, but Python is great at
interfacing C.
2- I'm reading Brendan McKay's paper on graph isomorphisms, and
reproducing it piece by piece in python. Eventually, I hope to provide
an open source alternative to his eminent yet restrictively licensed
program nauty (http://cs.anu.edu.au/~bdm/nauty/   ---   note the
non-military restriction).
3- I have some immediate ideas for speeding up the spring layout algorithm.

If I implement my changes to, say, the spring layout function, how do
I go about submitting those changes to you? Are you using a revision
control system? Would this involve the mailing lists on sourceforge?

Eagerly awaiting response,

Robert L Miller

On 2/20/07, Aric Hagberg <[EMAIL PROTECTED]> wrote:
> Hi Robert,
> [also cc: to Dan]
>
> Thanks for your note.  I do know about SAGE (I saw William talk about
> it at Scipy06) and I like the approach you are taking.  We're happy to
> work with you in whatever way to help make NetworkX work with SAGE
> and would be grateful to have you or any of the team help with
> development.
>
> Briefly, here is the current status of NetworkX as I see it:
>
> Dan Schult and I are the primary developers.
> Development is a slower than I like since neither of us
> are paid to work on software (but we do as part of our research).
>
> There are plenty of parts that could use improvement, modification,
> or addition.  For example
>
> - The documentation is incomplete.  I've been watching what your
>   project and numpy is doing regarding standard formats.
>
> - Many standard graph theory parts are missing since we primarily
>   added code that solved our research problems without any
>   systematic attempt to cover all of graph theory.
>
> - There are some warts that we'd like to fix.  One example is the
>   inability for us to enforce which functions work with which
>   algorithms.  E.g. XGraph with multiple edges probably breaks
>   many algorithms.  We have had some internal discussions on this
>   and could use a fresh perspective.
>
> - The drawing is a hack that I made because everyone wants to "see"
>   the graph.  So I hacked up something with matplotlib and there is an
>   interface to graphviz. Plenty of room for improvement there
>   including adding interactive control of the drawing in matplotlib.
>   John Hunter showed me how to do it but I haven't had the time
>   to follow up on that.  Also there are many layout algorithms
>   that could be added.
>
> I'll take a look at what you have done in SAGE and let me know
> what you think is the best way to proceed.  The sourceforge site,
> which hosts the mailing lists, is currently offline but hopefully
> will be back real soon.
>
> Aric
>
> On Tue, Feb 20, 2007 at 12:15:51AM -0800, Robert Miller wrote:
> > Hello Aric,
> >
> > My name is Robert Miller, and I am a mathematics graduate student at
> > the University of Washington, Seattle. I am also a developer for the
> > open source mathematics program SAGE
> > (http://sage.math.washington.edu/sage/). We have recently included
> > NetworkX, after determining that it is the best open source graph
> > theory software out there. I have noticed some places where the code
> > could be improved, and I would very much like to be a part of the
> > development team of NetworkX. I have already submitted a couple things
> > to the trac, as rlmill. You can see what we've done so far in SAGE at
> >
> > http://sage.math.washington.edu:9001/graph
> >
> > In particular, you can check out the survey we did of every piece of
> > software we could find, which led us to believe that yours was the
> > best. Also, if you'd like to try out the software itself, you can go
> > to
> >
> > http://sage.math.washington.edu:8100/graphs
> >
> > I've left a few examples up there for you. Most of what I've done so
> > far has been pretty basic. I've been trying to implement everyone's
> > requests, which for the most part have been for "pretty graphics" type
> > features. You should in particular check out
> >
> > http://sage.math.washington.edu:9001/graph_plotting
> >
> > which shows my latest accomplishment, "pretty" 3d pictures of graphs.
> > I look forward to hearing back from you soon.
> >
> > --
> > Robert L. Miller
> > http://www.robertlmiller.com/
>


-- 
Robert L. Miller
http://www.robertlmiller.com/




[sage-devel] Re: sage-2.2-alpha

2007-02-27 Thread Jaap Spies

William Stein wrote:
> Hi,
> 
> I've put sage-2.2.alpha3 here:
> 
>http://sage.math.washington.edu/home/was/pkgs/
> 
> Any build feedback will be appreciated (except on sage.math -- I'm 
> already building there...)
> 

real    60m19.773s
user    50m27.940s
sys     7m7.225s
To install gap, gp, singular, etc., scripts
in a standard bin directory, start sage and
type e.g., install_scripts('/usr/local/bin')
at the command prompt.

SAGE build/upgrade complete!
[EMAIL PROTECTED] sage-2.2.alpha3]$

--
All tests passed!
Total time for all tests: 914.4 seconds
[EMAIL PROTECTED] sage-2.2.alpha3]$

Cheers,

Jaap






[sage-devel] Re: Multivariate polynomial benchmark

2007-02-27 Thread Martin Albrecht

Hi,

I have a couple of suggestions for the benchmarks. 

> I would suggest something along the following lines for the MVPoly
> benchmark:
>
> Have a matrix of test cases:
>
> *number of indeterminates:
>  - small (3 indeterminates)
>  - medium (10 indeterminates)
>  - large (25 indeterminates)

Are you talking about the number of variables in the ring or in a single 
polynomial here? In either case I think 25 is pretty small. At least for the 
applications I have in mind, a factor of 10 - 100 is considered reasonably 
large.

> *length of polynomials
>  - small (3 monomials)
>  - medium (10 monomials)
>  - large (25 monomials)

Again, I think 25 is pretty small. Consider e.g. the polynomials during a GB 
calculation; these can blow up significantly. However, you don't, e.g., 
multiply those polynomials in a GB calculation anyway. 

> * Ring/Algebra
>  - Z (no very interesting - at least to me :))
>  - Z2 (special case for certain fields like crypto research - I do
> care about that)

 + the QuotientRing modulo the "field ideal".

>  - Zp small, i.e. p=32003
>  - Zp huge  i.e. p=something prime beyond 2^32 (not everybody has that
> implemented, at least not efficiently)
>  - Q small
>  - Q large
>  - Weyl Algebras, non-commutative Algebras in general - rather exotic,
> not present everywhere
> * Term Ordering - not sure about the impact of that - this might
> disappear:
>  - DegRevLex
>  - DegLex
>  - Lex
>  - Ordering Matrix
>
> Depending on how many options you select and after computing the
> cartesian product of all selected options you end up with lots of
> tests. To graph the result I really like what was done for FLINT by
> plotting the cases in a plane with colored circles of different size
> signifying who was faster by what factor.
>
> Operations:
>  - addition
>  - multiply
>  - power small
>  - power huge
>  - GCD
>
> adding should be pretty boring,  small powers are probably, too. Can
> anybody else suggest more operations?

How about operations on monomials like LCM? Also, S-polynomials (even though 
probably fairly boring) are relevant. 

If I had to choose a benchmarking suite, I would step through, let's say, F4 and 
take every monomial or polynomial operation and benchmark that ... well, we 
should probably step through some classical Buchberger to include reductions.

Martin

PS: Dear list, please let us know if you are not interested in this discussion 
anymore and we'll take it off-list.

-- 
name: Martin Albrecht
_pgp: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x8EF0DC99
_www: http://www.informatik.uni-bremen.de/~malb
_jab: [EMAIL PROTECTED]

