Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-11-29 Thread Brian W. Barrett
It would be difficult to do well without some MPI help, in my opinion. You certainly could use the Linux processor affinity API directly in the MPI application. But how would the process know which core to bind to? It could wait until after MPI_INIT and call MPI_COMM_RANK, but MPI implem…
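A minimal sketch of the approach Brian describes, using Linux's sched_setaffinity(2); the rank-to-core mapping below is purely illustrative, since (as the thread notes) a rank alone does not tell a process which core on its node it should own:

    #define _GNU_SOURCE
    #include <sched.h>      /* sched_setaffinity, CPU_SET */
    #include <unistd.h>     /* sysconf */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, ncores;
        cpu_set_t mask;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Guess a core from the global rank. Only sensible if ranks
           are packed onto nodes consecutively, which MPI does not
           guarantee -- this is exactly the problem Brian raises. */
        ncores = (int) sysconf(_SC_NPROCESSORS_ONLN);
        CPU_ZERO(&mask);
        CPU_SET(rank % ncores, &mask);

        if (sched_setaffinity(0, sizeof(mask), &mask) != 0)
            perror("sched_setaffinity");

        /* ... application work ... */
        MPI_Finalize();
        return 0;
    }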

Re: [OMPI users] Pernode request

2006-11-29 Thread Maestas, Christopher Daniel
Ralph, Thanks for the feedback. Glad we are clearing these things up. :-) So here's what OSC mpiexec is doing now:
---
-pernode : allocate only one process per compute node
-npernode <n> : allocate no more than <n> processes per compute node
---
> Cdm> I think I originally requested the -pernode…
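For illustration, the two behaviors side by side (the program name is a placeholder):

    # exactly one process per allocated compute node
    mpiexec -pernode ./a.out

    # at most 2 processes per allocated compute node
    mpiexec -npernode 2 ./a.out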

Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-11-29 Thread Durga Choudhury
Brian, does it matter which core the process gets bound to? They are all identical, and as long as the task is parallelized into equal chunks (that's the key part), it should not matter. The last time I had to do this, the problem involved real-time processing of a very large radar image…

Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-11-29 Thread Laurent . POREZ
I agree with this solution for the machinefile. Using mpiexec or a spawn command, you can append the CPU number to the hostname:

    mpiexec -host [hostname]:[cpu number] -n 1 mpi_test

or, for MPI_Comm_spawn:

    MPI_Info_set( mpi_info, "host", "[hostname]:[cpu number]" );

Cheers, Lau…
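Expanded into a self-contained program, the spawn variant might look like this (the host string "node0:1" and the child program name "mpi_test" are placeholders; whether the ":cpu" suffix is honored is up to the MPI implementation):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Info info;
        MPI_Comm intercomm;

        MPI_Init(&argc, &argv);

        /* Ask for host "node0", CPU 1 -- placeholder values. */
        MPI_Info_create(&info);
        MPI_Info_set(info, "host", "node0:1");

        MPI_Comm_spawn("mpi_test", MPI_ARGV_NULL, 1, info, 0,
                       MPI_COMM_SELF, &intercomm, MPI_ERRCODES_IGNORE);

        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }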

Re: [OMPI users] Myrinet problems on OSX

2006-11-29 Thread Scott Atchley
On Nov 21, 2006, at 1:27 PM, Brock Palen wrote: I had sent a message two weeks ago about this problem and talked with Jeff at SC06 about how it might not be an OMPI problem. But it appears now, working with Myricom, that it is a problem in both lam-7.1.2 and openmpi-1.1.2/1.1.1. Basically the re…

Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-11-29 Thread Jeff Squyres
There are a few issues involved here: - Brian was pointing out that AMDs are NUMA (and Intel may well go NUMA someday -- scaling up to hundreds of cores, unless something quite unexpected happens in computer architecture, simply is not feasible with UMA). So each core is…

Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-11-29 Thread Jeff Squyres
The real complexity comes in when trying to schedule on particular cores in a machine. For example, we discussed some real-world application examples that wanted to do the following (a sketch of the first case follows the list):
- Launch 1 MPI process per socket, pinning all cores on the socket to that process
- Launch 2 MPI processe…
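A sketch of that first case, assuming two cores per socket with consecutive virtual CPU IDs on each socket (which, as Gleb notes below, the BIOS does not guarantee):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Bind the calling process to every core of one socket. Assumes
       each socket owns cores_per_socket consecutive virtual CPU IDs
       -- an assumption, not something Linux promises. */
    static int bind_to_socket(int socket, int cores_per_socket)
    {
        cpu_set_t mask;
        int c;

        CPU_ZERO(&mask);
        for (c = 0; c < cores_per_socket; c++)
            CPU_SET(socket * cores_per_socket + c, &mask);

        return sched_setaffinity(0, sizeof(mask), &mask);
    }

    int main(void)
    {
        if (bind_to_socket(0, 2) != 0)  /* socket 0, 2 cores: placeholders */
            perror("sched_setaffinity");
        return 0;
    }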

Re: [OMPI users] Myrinet problems on OSX

2006-11-29 Thread Scott Atchley
On Nov 29, 2006, at 8:44 AM, Scott Atchley wrote: My last few runs all completed successfully without hanging. The job I am currently running just hung one node (it responds to ping, but I cannot ssh into it or use any terminals connected to it). There are no messages in dmesg, and vmstat shows t…

Re: [OMPI users] How to set paffinity on a multi-cpu node?

2006-11-29 Thread Gleb Natapov
On Wed, Nov 29, 2006 at 08:48:48AM -0500, Jeff Squyres wrote:
> - There's also the issue that the BIOS determines the core/socket
> order mapping to Linux virtual processor IDs. Linux virtual processor
> 0 is always socket 0, core 0. But what is Linux virtual processor 1?
> Is it socket 0, cor…
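One way to answer that question at run time is to read the topology Linux exports under sysfs on 2.6 kernels (a sketch; the two files shown are the standard per-CPU topology entries):

    #include <stdio.h>

    int main(void)
    {
        int cpu, pkg, core;
        char path[128];
        FILE *f;

        /* Walk virtual CPUs until a topology file is missing. */
        for (cpu = 0; ; cpu++) {
            pkg = core = -1;

            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/topology/"
                     "physical_package_id", cpu);
            if ((f = fopen(path, "r")) == NULL)
                break;
            fscanf(f, "%d", &pkg);
            fclose(f);

            snprintf(path, sizeof(path),
                     "/sys/devices/system/cpu/cpu%d/topology/core_id",
                     cpu);
            if ((f = fopen(path, "r")) != NULL) {
                fscanf(f, "%d", &core);
                fclose(f);
            }
            printf("virtual cpu %d -> socket %d, core %d\n",
                   cpu, pkg, core);
        }
        return 0;
    }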

Re: [OMPI users] Pernode request

2006-11-29 Thread Ralph Castain
Hi Chris, Thanks for the patience and the clarification - much appreciated. In fact, I have someone who needs to learn more about the code base, so I think I will assign this to him. At the least, he will have to learn a lot more about the mapper! I have no problem with modifying the pernode beha…

Re: [OMPI users] Pernode request

2006-11-29 Thread Maestas, Christopher Daniel
Ralph, I agree with what you stated in points 1-4. That is what we are looking for. I understand your point now about the non-MPI users too. :-) Thanks, -cdm

[OMPI users] For Open MPI + BPROC users

2006-11-29 Thread Galen Shipman
We have found a potential issue with BPROC that may affect Open MPI. Open MPI by default uses PTYs for I/O forwarding; if PTYs aren't set up on the compute nodes, Open MPI will revert to using pipes. Recently (today) we found a potential issue with PTYs and BPROC. A simple reader/writer usin…
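The message is cut off here, but a minimal PTY reader/writer of the kind described might look like the following (a sketch, not Galen's actual test program; openpty() needs -lutil on Linux):

    #include <pty.h>       /* openpty(); link with -lutil */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int master, slave;
        char buf[64];
        ssize_t n;

        if (openpty(&master, &slave, NULL, NULL, NULL) != 0) {
            perror("openpty");
            return 1;
        }

        if (fork() == 0) {            /* child: write to the slave end */
            const char msg[] = "hello through the pty\n";
            write(slave, msg, sizeof(msg) - 1);
            _exit(0);
        }

        n = read(master, buf, sizeof(buf) - 1);  /* parent: read back */
        if (n > 0) {
            buf[n] = '\0';
            printf("read %zd bytes: %s", n, buf);
        }
        return 0;
    }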

[OMPI users] x11 forwarding

2006-11-29 Thread Dave Grote
I cannot get X11 forwarding to work using mpirun. I've tried all of the standard methods, such as setting pls_rsh_agent = "ssh -X", using xhost, and a few other things, but nothing works in general. In the FAQ, http://www.open-mpi.org/faq/?category=running#mpirun-gui, a reference is made to oth…
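For reference, the same MCA parameter can also be passed on the mpirun command line (xterm is just an example GUI program; this is the approach the message says was already tried via the parameter file):

    mpirun --mca pls_rsh_agent "ssh -X" -np 2 xterm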