[OMPI users] Oversubscribing a subset of a machine's cores

2008-02-07 Thread Torje Henriksen
Hi, I have a slightly odd problem, that you might not think is important at all. Anyways, here it goes: I'm using a single eight-core machine. I want to oversubscribe four of the cores and leave the other four idle. My approach is to make a hostfile: localhost slot=4 # shouldn't this li
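A sketch of the hostfile approach Torje describes (hypothetical filename `myhosts`; Open MPI hostfiles conventionally use the keyword `slots`):

```
# myhosts -- advertise only four slots on the local node
localhost slots=4 max_slots=4
```

Oversubscribing those four slots would then be something like `mpirun -np 8 -hostfile myhosts ./app`. Note that slots only limit how many processes are launched, not which cores they land on; actually leaving four cores idle would additionally need processor affinity (e.g. `taskset` on Linux or Open MPI's paffinity settings).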

Re: [OMPI users] MPI_CART_CREATE and Fortran 90 Interface

2008-02-07 Thread Jeff Squyres
Michal - You are absolutely right; sorry about that. I have fixed the bug in the OMPI development trunk which means that it will be incorporated in the upcoming v1.3 series (see https://svn.open-mpi.org/trac/ompi/changeset/17395) . I also filed a change request for the v1.2 branch; if we e

Re: [OMPI users] mpirun, paths and xterm again (xserver problem solved; library problem still there)

2008-02-07 Thread Jeff Squyres
The whole question of how to invoke xterms for gdb via mpirun keeps coming up, so when this thread is done, I'll add a pile of this information to the FAQ. More below. On Feb 6, 2008, at 10:52 AM, jody wrote: I now solved the "ssh" part of my Problem The XServer is being started with the n
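For readers landing on this thread before the FAQ entry exists: the usual pattern for one gdb-in-xterm per rank, pieced together from the commands shown later in this thread (exact flags vary by setup; `-x DISPLAY` exports the display variable to the launched processes):

```
mpirun -np 4 -x DISPLAY xterm -hold -e gdb ./my_app
```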

Re: [OMPI users] Infinipath context limit

2008-02-07 Thread Daniël Mantione
On Wed, 6 Feb 2008, Christian Bell wrote: > Hi Daniel -- > > PSM should determine your node setup and enable shared contexts > accordingly, but it looks like something isn't working right. You > can apply the patch I've attached to this e-mail and things should > work again. Alas, it

Re: [OMPI users] Oversubscribing a subset of a machine's cores

2008-02-07 Thread Joe Landman
Torje Henriksen wrote: [...] Still, all eight cores are being used. I can see why you would want to use all cores, and I can see that oversubscribing a subset of the cores might seem silly. My question is, is it possible to do what I want to do without hacking the open mpi code? Could

Re: [OMPI users] mpirun, paths and xterm again (xserver problem solved; library problem still there)

2008-02-07 Thread jody
Hi Jeff > The results of these two commands do seem to contradict each other; > hmm. Just to be absolutely sure, did you cut-n-paste the > LD_LIBRARY_PATH directory output from printenv and try to "ls" it to > ensure that it's completely spelled right, etc.? I suspect that it's > right since you

Re: [OMPI users] mpirun, paths and xterm again (xserver problem solved; library problem still there)

2008-02-07 Thread Jeff Squyres
On Feb 7, 2008, at 10:07 AM, jody wrote: I wrote a little command called envliblist which consists of this line: printenv | grep PATH | gawk -F "_PATH=" '{ print $2 }' | gawk -F ":" '{ print $1 }' | xargs ls -al When i do mpirun -np 5 -hostfile testhosts -x DISPLAY xterm -hold -e ./envlibli
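jody's `envliblist` one-liner, lightly restructured for readability (plain `awk` substituted for `gawk` here; behavior should be the same for this pattern): for every `*_PATH` environment variable, print the first directory in its value. The original then pipes the result into `xargs ls -al` to verify that each directory actually exists on the node.

```shell
printenv | grep '_PATH=' \
  | awk -F '_PATH=' '{ print $2 }' \
  | awk -F ':' '{ print $1 }'
```

Running this under `mpirun` on each host is a quick way to confirm that `LD_LIBRARY_PATH` and friends point at real directories on every node.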

Re: [OMPI users] bug in MPI_ACCUMULATE for window offsets > 2**31 - 1 bytes? openmpi v1.2.5

2008-02-07 Thread Tim Prins
Hi Stefan, I was able to verify the problem. Turns out this is a problem with other onesided operations as well. Attached is a simple test case I made in C using MPI_Put that also fails. The problem is that the target count and displacements are both sent as signed 32 bit integers. Then, the

Re: [OMPI users] process placement with torque and OpenMP

2008-02-07 Thread Tim Prins
Hi Brock, As far as I know there is no way to do this with Open MPI and torque. I believe people usually use hostfiles to do this sort of thing, but hostfiles do not work with torque. You may want to look into the launcher commands to see if torque will do it for you. Slurm has an option '--

Re: [OMPI users] Bad behavior in Allgatherv when a count is 0

2008-02-07 Thread Tim Mattox
Kenneth, Have you tried the 1.2.5 version? There were some fixes to the vector collectives that could have resolved your problem. On Feb 4, 2008 5:36 PM, George Bosilca wrote: > Kenneth, > > I cannot replicate this weird behavior with the current version in the > trunk. I guess it has been fixed

Re: [OMPI users] bug in MPI_ACCUMULATE for window offsets > 2**31 - 1 bytes? openmpi v1.2.5

2008-02-07 Thread Tim Prins
The fix I previously sent to the list has been committed in r17400. Thanks, Tim Tim Prins wrote: Hi Stefan, I was able to verify the problem. Turns out this is a problem with other onesided operations as well. Attached is a simple test case I made in c using MPI_Put that also fails. The p

Re: [OMPI users] openmpi credits for eager messages

2008-02-07 Thread Jeff Squyres
What I missed in this whole conversation is that the pieces of text that Ron and Dick are citing are *on the same page* in the MPI spec; they're not disparate parts of the spec that accidentally overlap in discussion scope. Specifically, it says: Resource limitations Any pending com

Re: [OMPI users] Can't compile C++ program with extern "C" { #include mpi.h }

2008-02-07 Thread Adam C Powell IV
On Wed, 2008-01-30 at 21:21 -0500, Jeff Squyres wrote: > On Jan 30, 2008, at 5:35 PM, Adam C Powell IV wrote: > > > With no reply in a couple of weeks, I'm wondering if my previous > > message > > got dropped. (Then again, my previous message was a couple of weeks > > late in replying to its pr