On Nov 29, 2008, at 07:35 , pong wrote:

>
> Hi,
>
>     I wonder if SAGE is optimized for multi-core CPUs (people told me
> that many programs aren't).

This is not an easy question to answer.  Sage is built from many  
components that were not specifically designed with Sage or  
multiprocessor issues in mind.

Most programmers and algorithm designers, even today, don't think in  
terms of a "tightly coupled" multiprocessor implementation.  Some do  
think of loose coupling.  The distinction is the amount of information  
that needs to be shared between the cooperating processes.  For example,  
Michael and a host of others worked hard to get the Sage build to take  
advantage of multiple processors.  This was not easy, because the  
components come from many sources, and their build process was not  
designed to take advantage of these systems, but it was feasible  
because the different components need very little information from the  
other components (basically, 'make' has to know what the dependencies  
are, so it can find independent builds to run at the same time).
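As a sketch of that idea (this is not Sage's actual build machinery, and the component names below are just stand-ins): once the dependency graph is known, steps with no dependencies on each other can run concurrently, which is essentially what 'make -j' does.

```python
# Sketch: running independent "build steps" concurrently, in the same
# spirit as 'make -j'.  The build() function and component names are
# hypothetical stand-ins, not Sage's real build system.
from concurrent.futures import ThreadPoolExecutor

def build(component):
    # Stand-in for compiling one independent component.
    return "built " + component

# No component here depends on any other, so all four can run at once.
components = ["gmp", "pari", "singular", "maxima"]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(build, components))

print(results)
```

The point is the same one 'make' relies on: the scheduler needs only the dependency information, not any data shared between the steps themselves.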

The software that makes up Sage is another matter entirely.  Code does  
not automatically run well on a multiprocessor system (whether  
multi-core or multiple single-core chips); making it do so takes a lot  
of work.
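To give a flavor of what that restructuring looks like (a toy sketch, not anything from Sage): a compute-bound loop has to be explicitly split into independent chunks before multiple cores can help.  The partial_sum() function and chunk boundaries here are invented for illustration.

```python
# Sketch: splitting a compute-bound sum across cores with the
# standard-library multiprocessing module.  Real mathematical code
# rarely decomposes this cleanly -- that is exactly the hard part.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    # Four independent chunks covering range(0, 1_000_000).
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```

Each worker needs nothing from the others until the final sum, which is what makes this case easy; most algorithms are not so obliging.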

> Also, does it matter (in terms of speed)
> that SAGE is running on a 64-bit OS vs 32-bit?

This is another very difficult question, one that involves the hardware  
design as well as the specific algorithm being implemented.  I think  
it's safe to say that, as a rule of thumb, utilizing 64-bit processes  
only helps if your algorithm needs a *lot* of memory.  There is some  
(minor, I think) advantage in the larger size of the "int" type, but  
for most things that Sage does, I doubt that it counts for much.
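If you want to check which kind of build you are actually running, Python (which underlies Sage) can report the word size it was compiled with:

```python
# Sketch: checking the pointer/word size of the running interpreter.
# A 64-bit build has a vastly larger address space (hence more usable
# memory) and wider native integers.
import struct
import sys

bits = 8 * struct.calcsize("P")  # size of a C pointer, in bits
print(bits)                      # 32 or 64, depending on the build
print(sys.maxsize)               # 2**31 - 1 or 2**63 - 1
```
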

> In general, what would
> have the most significant impact on the speed of running SAGE: # of
> cores, OS, or simply the clock rate of the CPU?

I think that the clock rate would have the most impact on today's  
version of Sage.  Most algorithms in this area are compute-intensive  
and not written for multiprocessors.  The OS comes into play for overall  
performance during a long session, where library usage, file system  
access, and other "ancillary" aspects increase in importance.

One reason that the number of cores (or processors) becomes important  
for the kinds of systems that Sage runs on is "sharing": you can run  
one or more heavily compute-bound Sage jobs at the same time you are  
typesetting your thesis, surfing the web, and sending email to Dad  
asking for money (think "Dad == NSF") :-}.

As an aside, if you run Mac OS X, you can use Activity Monitor to see  
what your processors are doing.  Select "Floating CPU Window" in  
the Window menu.  This gives you a bar graph, one bar for each  
processor or core, that shows its % usage.  I imagine Linux must have a  
similar gizmo available.
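On Linux, 'top' (press '1' for per-core figures) or 'mpstat -P ALL' serve the same purpose.  Programmatically, Python's standard library can at least tell you how many cores you have to watch:

```python
# Sketch: reporting the number of CPUs visible to the OS.
import os

print(os.cpu_count())  # e.g. 2 on a dual-core machine
```
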

HTH

Justin

--
Justin C. Walker, Curmudgeon at Large
Institute for the Absorption of Federal Funds
-----------
If it weren't for carbon-14, I wouldn't date at all.
-----------


