Re: [OMPI users] OpenMPI and OAR issues

2008-11-06 Thread Andrea Pellegrini
Thanks guys! I finally fixed my problem!!!
apellegr@m45-039:~$ mpirun -prefix ~/openmpi -machinefile $OAR_FILE_NODES -mca pls_rsh_assume_same_shell 0 -mca pls_rsh_agent "oarsh" -np 2 /n/poolfs/z/home/apellegr/mpi_test/hello_world.x86
Warning: Permanently added '[m45-039.pool]:6667' (RSA) to the list of known hosts.
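
A minimal sketch of how those same two settings can be made persistent, assuming the OpenMPI 1.2-era pls_rsh parameter names used in the command above (environment variables of the form OMPI_MCA_<param>, or the per-user file ~/.openmpi/mca-params.conf):

  # Option 1: export the MCA parameters so every mpirun picks them up
  export OMPI_MCA_pls_rsh_agent=oarsh
  export OMPI_MCA_pls_rsh_assume_same_shell=0
  mpirun -prefix ~/openmpi -machinefile $OAR_FILE_NODES -np 2 /n/poolfs/z/home/apellegr/mpi_test/hello_world.x86

  # Option 2: record them in the per-user MCA parameter file
  mkdir -p ~/.openmpi
  echo 'pls_rsh_agent = oarsh' >> ~/.openmpi/mca-params.conf
  echo 'pls_rsh_assume_same_shell = 0' >> ~/.openmpi/mca-params.conf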

Re: [OMPI users] OpenMPI and OAR issues

2008-11-06 Thread Ralph Castain
Thanks for the OAR explanation! Sorry - I should have been clearer in my comment. I was trying to indicate that the cmd starting with "set" is triggering a bash syntax error, and that is why the launch fails. The rsh launcher uses a little "probe" technique to try and guess the remote shell
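
A quick way to see the mismatch the probe is guessing around, assuming oarsh accepts the same "host command" form as ssh, is to compare the login shell on the submission node with the one a remote node reports:

  # local login shell
  echo $SHELL
  # shell reported by the first node in the OAR allocation
  oarsh $(head -1 $OAR_FILE_NODES) 'echo $SHELL'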

Re: [OMPI users] OpenMPI and OAR issues

2008-11-06 Thread Jeff Squyres
OMPI assumes (for faster startup) that your local shell is the same as your remote shell. If that's not the case, try setting pls_rsh_assume_same_shell to 0. On Nov 6, 2008, at 3:31 PM, George Bosilca wrote: OAR is the batch scheduler used on the Grid5K platform. As far as I know, set is
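
The rsh launcher's parameters (including pls_rsh_assume_same_shell and pls_rsh_agent) and their current values can be listed with ompi_info; a sketch, assuming an OpenMPI release of this vintage where the launcher framework is still named pls:

  ompi_info --param pls rsh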

Re: [OMPI users] OpenMPI and OAR issues

2008-11-06 Thread George Bosilca
OAR is the batch scheduler used on the Grid5K platform. As far as I know, set is a basic shell internal command, and it is understood by all shells. The problem here seems to be that somehow we're using bash, but with tcsh shell code (because setenv is definitely not something that bash understands)
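
To make the mismatch concrete, here is a small bash-only sketch showing what happens when csh/tcsh-style assignments are fed to bash (the path is just the openmpi prefix used elsewhere in this thread):

  # csh/tcsh-style lines, executed by bash:
  set prefix = /n/poolfs/z/home/apellegr/openmpi    # bash's "set" only replaces $1 $2 $3; nothing is assigned
  setenv PATH /n/poolfs/z/home/apellegr/openmpi/bin:$PATH    # bash reports "setenv: command not found"
  # the bash equivalents are a plain assignment and an export:
  prefix=/n/poolfs/z/home/apellegr/openmpi
  export PATH=$prefix/bin:$PATH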

Re: [OMPI users] OpenMPI and OAR issues

2008-11-06 Thread Ralph Castain
I have no idea what "oar" is, but it looks to me like the rsh launcher is getting confused about the remote shell it will use - I don't believe that the "set" cmd shown below is proper bash syntax, and that is the error that is causing the launch to fail. What remote shell should it find? I

[OMPI users] OpenMPI and OAR issues

2008-11-06 Thread Andrea Pellegrini
Hi all, I'm trying to run an openmpi application on an OAR cluster. I think the cluster is configured correctly, but I still have problems when I run mpirun:
apellegr@m45-037:~$ mpirun -prefix /n/poolfs/z/home/apellegr/openmpi -machinefile $OAR_FILE_NODES -mca pls_rsh_agent "oarsh" -np 10 /n/p
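
Two quick checks that can help narrow this kind of failure down, assuming oarsh behaves like ssh for one-off commands: confirm what $OAR_FILE_NODES contains and that oarsh can reach one of the listed nodes non-interactively:

  # nodes OAR allocated to this job
  cat $OAR_FILE_NODES
  # non-interactive connectivity check to the first node in the list
  oarsh $(head -1 $OAR_FILE_NODES) hostname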