On Nov 23, 3:54 am, Jason Grout <jason-s...@creativetrax.com> wrote:
> On 11/22/10 1:48 PM, Ethan Van Andel wrote:
>
> > In my development, I'm attempting to parallelize some code. However,
> > the bottleneck is a call to numpy.linalg.lapack_lite.zgesv, which is
> > the point where numpy calls LAPACK to solve my complex system of
> > linear equations. Ideally I'd like to parallelize this, and I know
> > that ScaLAPACK has a parallel version, pzgesv. Is ScaLAPACK supported
> > by Sage, or are there any plans to support it in the future? If not,
> > I'll have to look for workarounds.
>
> I would definitely start by asking on the numpy list about this. It
> sounds interesting.
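For reference, the bottleneck call is just what numpy.linalg.solve does
with complex double-precision input; roughly this sketch (the size here
is made up, purely for illustration):

    import numpy as np

    # Complex double-precision system A x = b; numpy.linalg.solve hands
    # this off to LAPACK's zgesv (lapack_lite in a stock NumPy build).
    n = 2000  # made-up size
    A = np.random.randn(n, n) + 1j * np.random.randn(n, n)
    b = np.random.randn(n) + 1j * np.random.randn(n)
    x = np.linalg.solve(A, b)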
AFAIK there's no existing, direct ScaLAPACK wrapper for Python. I may
do something about this in the spring for my research (in particular if
I can get somebody to join me). There might be something you could use
in petsc4py.

NOTE that ScaLAPACK is a very different beast from LAPACK and not
something you can drop in to magically get parallelization. ScaLAPACK
is meant for use on an MPI cluster, and you need to write the parallel
code manually.

If you can make do with a single node, a much easier approach is to use
a parallel (threaded) version of BLAS/LAPACK (commercial LAPACK
implementations support this; I'm not sure what the current status of
ATLAS is). You can compile NumPy against a commercial LAPACK and have
all double-precision floating-point linear algebra just run faster
without any extra effort. E.g., the Enthought Python Distribution does
this for a six-fold speedup over ATLAS.

Dag Sverre
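P.S. A quick way to check which BLAS/LAPACK your NumPy build is
actually linked against (the exact output format varies between NumPy
versions, so treat this as a sketch):

    import numpy as np

    # Prints the BLAS/LAPACK libraries NumPy was built against; a stock
    # lapack_lite build shows no optimized libraries here.
    np.__config__.show()

With a threaded BLAS/LAPACK the thread count is typically controlled by
an environment variable (e.g. OMP_NUM_THREADS or MKL_NUM_THREADS) set
before Python starts, so you can check the scaling without touching
your code.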