[OMPI users] openmpi 1.2.1 on PK 2.6, and 64-bit version out

2007-05-11 Thread Michael Creel
I've released a 64-bit version of ParallelKnoppix, a live CD that lets you set
up a cluster for MPI-based parallel computing in about 10 minutes, without
installing anything on the computers in the cluster. openmpi v1.2.1 (as well as
LAM and MPICH) is supported. Test reports are welcome, since this is an initial
release and I have limited hardware to test it on. See www.parallelknoppix.net
for more information.
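
If anyone wants a quick sanity check after booting the cluster, a plain
MPI hello-world along these lines should run under any of the three MPI
implementations (a generic sketch, not taken from the PK docs; compile
with mpicc hello.c -o hello and launch with mpirun -np 4 ./hello):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int rank, size, len;
      char name[MPI_MAX_PROCESSOR_NAME];

      MPI_Init(&argc, &argv);                 /* start the MPI runtime */
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank   */
      MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total process count   */
      MPI_Get_processor_name(name, &len);     /* host we landed on     */
      printf("Hello from rank %d of %d on %s\n", rank, size, name);
      MPI_Finalize();
      return 0;
  }

If every node in the cluster shows up in the output, the MPI layer is
working.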


Re: [OMPI users] Problem running hpcc with a threaded BLAS

2007-05-11 Thread Götz Waschk

On 4/27/07, Götz Waschk wrote:

I'm testing my new cluster installation with the hpcc benchmark and
openmpi 1.2.1 on 32-bit RHEL5. I have some trouble using a threaded
BLAS implementation. I have tried ATLAS 3.7.30 compiled with pthread
support. It crashes as reported here:

[...]

I have a problem with Goto BLAS 1.14 too: the output of hpcc stops
before the HPL run, and the hpcc processes then seem to do nothing
while consuming 100% CPU. If I set the maximum number of threads for
Goto BLAS to 1, hpcc works fine again.
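
For the record, this is how I limit Goto BLAS to a single thread.
GOTO_NUM_THREADS is the variable my build reads (OMP_NUM_THREADS may
apply instead, depending on how the library was built), and Open MPI's
-x flag exports it to every rank; the -np value is just an example:

  export GOTO_NUM_THREADS=1
  mpirun -np 16 -x GOTO_NUM_THREADS ./hpcc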


Hi,

Replying to myself here: I've tested this a bit more. It works fine
if I don't start hpcc from a Gridengine job. I don't think this is
related to openmpi's Gridengine integration, as the problem persists
if I disable Gridengine integration on the mpirun command line. I'll
keep you informed if I find a solution.
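
For reference, I disable the integration by excluding the gridengine
components via MCA parameters on the command line, roughly like this
(component names as I understand them in 1.2.1, so treat this as a
sketch):

  mpirun --mca ras ^gridengine --mca pls ^gridengine -np 16 ./hpcc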

Regards, Götz Waschk

--
AL I:40: Do what thou wilt shall be the whole of the Law.



[OMPI users] torque and mpiBlast

2007-05-11 Thread Brock Palen

We use torque and tm for spawning MPI jobs on our cluster.
We have a user who will be using mpiBlast. From their documentation:


"Running a query on 25 nodes would look like:
mpirun -np 27 mpiblast --config-file=/path/to/mpiblast.conf -p blastn  
-d nt -i blast_query.fas -o blast_results.txt"


So there are 2 extra processes for special use that do not consume
much CPU time; hence their advice to spawn 2 more processes than
there are dedicated nodes.


My question is: how can I verify that the extra processes are placed
on CPUs that already have a full working process (one of the other
25) on them?


Is there a way to say: place 25 processes in the normal slots, and
let ranks 0 and 1 be placed wherever?
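
What I'm imagining is something like an Open MPI hostfile where the
first node gets two extra slots, so that with the default by-slot
mapping the two special ranks end up there alongside a worker
(hypothetical node names, and I'm not sure how this interacts with
the allocation tm gets from torque):

  node01 slots=3
  node02 slots=1
  ...
  node25 slots=1

  mpirun -np 27 --hostfile hosts mpiblast ...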

Does this make sense?


Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985