Re: [OMPI users] Debugging memory use of Open MPI

2009-04-16 Thread Eugene Loh
Shaun Jackman wrote: Eugene Loh wrote: What's the rest? I said the shared-memory area is much smaller, but I was confused about which OMPI release I was using. So, the shared-memory area was 128 Mbyte and it was getting mapped in once for each process, and so it was being double-counted. If th

Re: [OMPI users] Debugging memory use of Open MPI

2009-04-16 Thread Shaun Jackman
Eugene Loh wrote: ... What's the rest? I said the shared-memory area is much smaller, but I was confused about which OMPI release I was using. So, the shared-memory area was 128 Mbyte and it was getting mapped in once for each process, and so it was being double-counted. If there are eight proces
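A quick way to see this multiple counting (a sketch; <pid> is a placeholder and the name of the sm backing file varies by OMPI release):

    # Every process that attaches the sm segment has its own mapping of the
    # same backing file, so summing per-process totals counts the 128 Mbyte
    # area once per process rather than once overall.
    grep -i openmpi /proc/<pid>/maps
    pmap -x <pid>     # shared mappings carry an 's' in the mode field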

Re: [OMPI users] Debugging memory use of Open MPI

2009-04-16 Thread Eugene Loh
Eugene Loh wrote: Shaun Jackman wrote: What's the purpose of the 400 MB that MPI_Init has allocated? It's for... um, I don't know. Let's see... About a third of it appears to be vt_open() -> VTThrd_open() -> VTGen_open(), which I'm guessing is due to the VampirTrace instrumentation (maybe al
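If VampirTrace is indeed the culprit, two hedged workarounds, assuming the application was built with the VT-enabled wrapper (mpicc-vt) that OMPI 1.3 ships; VT_BUFFER_SIZE is VampirTrace's per-thread trace-buffer size:

    mpicc -o app app.c                     # plain wrapper: no VT instrumentation
    VT_BUFFER_SIZE=4M mpirun -np 8 ./app   # or shrink the trace buffer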

Re: [OMPI users] MPI_Comm_spawn and orted

2009-04-16 Thread Jerome BENOIT
Thanks for the info. Meanwhile I have set: mpi_param_check = 0 in my system-wide configuration file on the workers and mpi_param_check = 1 on the master. Jerome. Ralph Castain wrote: Thanks! That does indeed help clarify. You should also then configure OMPI with --disable-per-user-config-f
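For reference, the system-wide file Jerome means lives at <prefix>/etc/openmpi-mca-params.conf and takes simple key = value lines (a sketch; the prefix placeholder stands for wherever OMPI is installed):

    # <prefix>/etc/openmpi-mca-params.conf on each worker:
    mpi_param_check = 0

    # the same file on the master:
    mpi_param_check = 1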

Re: [OMPI users] Intel compiler libraries (was: libnuma issue)

2009-04-16 Thread Francesco Pietra
On Thu, Apr 16, 2009 at 5:37 PM, Jeff Squyres wrote: > On Apr 16, 2009, at 11:29 AM, Francesco Pietra wrote: > >> francesco@tya64:~$ ssh 192.168.1.33 env | sort >> HOME=/home/francesco >> LANG=en_US.UTF-8 >> LOGNAME=francesco >> MAIL=/var/mail/francesco >> PATH=/usr/local/bin:/usr/bin:/bin:/usr/bi

Re: [OMPI users] Intel compiler libraries (was: libnuma issue)

2009-04-16 Thread Francesco Pietra
As a quick answer before I go to study your and Douglas's mail: the desktop toward which I was sshing is a single-processor, 10-year-old desktop with nothing about Intel or openmpi on it. It only has ssh software, to run Amber procedures on remote machines. I can try your suggested ssh toward a parallel com

Re: [OMPI users] Intel compiler libraries (was: libnuma issue)

2009-04-16 Thread Douglas Guptill
On Thu, Apr 16, 2009 at 05:29:14PM +0200, Francesco Pietra wrote: > On Thu, Apr 16, 2009 at 3:04 PM, Jeff Squyres wrote: ... > Given my inexperience as a system analyst, I assume that I am messing > something up. Unfortunately, I was unable to discover where. > An editor is waiting comple

Re: [OMPI users] Intel compiler libraries (was: libnuma issue)

2009-04-16 Thread Jeff Squyres
On Apr 16, 2009, at 11:29 AM, Francesco Pietra wrote: francesco@tya64:~$ ssh 192.168.1.33 env | sort HOME=/home/francesco LANG=en_US.UTF-8 LOGNAME=francesco MAIL=/var/mail/francesco PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games PWD=/home/francesco SHELL=/bin/bash SHLVL=1 SSH_CLIENT=1
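A minimal sketch of the usual fix here: source the Intel environment scripts in ~/.bashrc on each remote node, above any interactive-only guard, so that non-interactive ssh logins (the kind mpirun uses to start remote processes) get the same PATH and LD_LIBRARY_PATH. The install paths and script names below are examples only and vary by compiler version:

    # ~/.bashrc, before any `[ -z "$PS1" ] && return` guard:
    . /opt/intel/cce/10.1.xxx/bin/iccvars.sh
    . /opt/intel/fce/10.1.xxx/bin/ifortvars.sh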

Re: [OMPI users] Intel compiler libraries (was: libnuma issue)

2009-04-16 Thread Francesco Pietra
On Thu, Apr 16, 2009 at 3:04 PM, Jeff Squyres wrote: > I believe that Nysal was referring to > >  ./configure CC=icc CXX=icpc F77=ifort FC=ifort LDFLAGS=-static-intel > --prefix=/usr I have completely removed openmpi-1.2.3 and reinstalled in /usr/local from source on a Tyan S2895. From my .bash

Re: [OMPI users] MPI_Comm_spawn and orted

2009-04-16 Thread Ralph Castain
Thanks! That does indeed help clarify. You should also then configure OMPI with --disable-per-user-config-files. MPI procs will automatically look at the default MCA parameter file, which is probably on your master node (wherever mpirun was executed). However, they also look at the user's h
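For reference, the two files in play (default locations; the per-user lookup is what --disable-per-user-config-files removes):

    $HOME/.openmpi/mca-params.conf         per-user, read by every MPI proc
    <prefix>/etc/openmpi-mca-params.conf   system-wide default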

Re: [OMPI users] default current working directory of started application

2009-04-16 Thread Jerome BENOIT
Hi Again, Ralph Castain wrote: Hokay, I can see that. Are you looking for an mca param that specifies a file that might contain config info we should read when starting up the orted? An mca param sounds appropriate. Here, as far as I can understand, orted is not involved: on my SLURM c

Re: [OMPI users] MPI_Comm_spawn and orted

2009-04-16 Thread Jerome BENOIT
Hi, thanks for the reply. Ralph Castain wrote: The orteds don't pass anything from MPI_Info to srun during a comm_spawn. What the orteds do is to chdir to the specified wdir before spawning the child process to ensure that the child has the correct working directory, then the orted changes ba

Re: [OMPI users] MPI_Comm_spawn and orted

2009-04-16 Thread Ralph Castain
The orteds don't pass anything from MPI_Info to srun during a comm_spawn. What the orteds do is to chdir to the specified wdir before spawning the child process to ensure that the child has the correct working directory, then the orted changes back to its default working directory. The or
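A minimal C sketch of the behaviour Ralph describes: the parent passes "wdir" via MPI_Info, and the orted chdirs there before starting each child. The command name, process count, and path are examples only:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Info info;
        MPI_Comm child;

        MPI_Init(&argc, &argv);
        MPI_Info_create(&info);
        /* the orted chdirs here before spawning each child process */
        MPI_Info_set(info, "wdir", "/local/folder");
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, info, 0,
                       MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);
        MPI_Info_free(&info);
        MPI_Finalize();
        return 0;
    }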

Re: [OMPI users] default current working directory of started application

2009-04-16 Thread Ralph Castain
Hokay, I can see that. Are you looking for an mca param that specifies a file that might contain config info we should read when starting up the orted? What would this local configuration file look like (e.g., what kind of config directives would you need/want), would you provide it

Re: [OMPI users] default current working directory of started application

2009-04-16 Thread Jerome BENOIT
Hi! Thanks for the reply. On a cluster whose workers have no shared home directory, when the worker programs are spawned via MPI_Comm_spawn{,_multiple}, it would be nice to set up a default alternative current working directory that is local (rather than global, like $HOME) via a local configuration file: you can play with the wdir

Re: [OMPI users] default current working directory of started application

2009-04-16 Thread Ralph Castain
Not currently - could be done if there is a strong enough reason to do so. Generally, though, the -wdir option seems to do the same thing. Is there a problem with it, or some need it doesn't satisfy? On Apr 15, 2009, at 11:00 PM, Jerome BENOIT wrote: Hello List, in FAQ Running MPI jobs, p
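For the record, the option in question (path and process count are examples only):

    mpirun -np 8 -wdir /scratch/run ./app   # every started process begins in /scratch/run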

[OMPI users] Intel compiler libraries (was: libnuma issue)

2009-04-16 Thread Jeff Squyres
I believe that Nysal was referring to ./configure CC=icc CXX=icpc F77=ifort FC=ifort LDFLAGS=-static-intel --prefix=/usr This method makes editing your shell startup files unnecessary for running on remote nodes, but you'll still need those files sourced for interactive use of the Intel
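Spelled out as one command (the /usr prefix is the one discussed in this thread; adjust to taste):

    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
        LDFLAGS=-static-intel --prefix=/usr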

Re: [OMPI users] libnuma issue

2009-04-16 Thread Francesco Pietra
The suggestion did not work the way I implemented it: ./configure CC=/..cce..icc CXX=/..cce..icpc F77=/..fce..ifort FC=/..fce..ifort --with-libnuma=/usr --prefix=/usr --enable-static ./configure CC=/..cce..icc CXX=/..cce..icpc F77=/..fce..ifort FC=/..fce..ifort --with-libnuma=/usr --prefix=/usr then e

Re: [OMPI users] An mpirun question

2009-04-16 Thread Terry Frankcombe
Hi Min Zhu, You need to read about hostfiles and bynode/byslot scheduling. See here: Ciao On Thu, 2009-04-16 at 10:43 +0100, Min Zhu wrote: > Dear all, > > > > I wonder if you could help me with this question. > > I have got
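A sketch of what that looks like for three 8-processor servers (hostnames are placeholders): capping slots in the hostfile bounds how many processes each server receives, and the default byslot policy fills each host up to its slot count before moving on:

    # myhosts
    server1 slots=4
    server2 slots=4
    server3 slots=4

    mpirun --hostfile myhosts -np 12 ./app            # 4 processes per server
    mpirun --hostfile myhosts -np 12 --bynode ./app   # or round-robin across nodes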

Re: [OMPI users] MPI_Comm_spawn and orted

2009-04-16 Thread Jerome BENOIT
Hi! Finally I got it: passing the mca key/value `"plm_slurm_args"/"--chdir /local/folder"' does the trick. As a matter of fact, my code passes the MPI_Info key/value `"wdir"/"/local/folder"' to MPI_Comm_spawn as well: the working directories on the nodes of the spawned programs are `nodes:/loc
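The same trick from the command line (the folder is an example; plm_slurm_args adds extra arguments to the srun invocation that launches the orteds):

    mpirun --mca plm_slurm_args "--chdir /local/folder" -np 1 ./spawner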

[OMPI users] An mpirun question

2009-04-16 Thread Min Zhu
Dear all, I wonder if you could help me with this question. I have got 3 Linux servers with 8 processors on each server. If I run a job using the mpirun command, is there any way to specify the number of processors to be used on each server? At the moment, I find that I can o

Re: [OMPI users] MPI_Comm_spawn and orted

2009-04-16 Thread Jerome BENOIT
Hello Again, Jerome BENOIT wrote: Hello List, I have just noticed that, when MPI_Comm_spawn is used to launch programs around, orted's working directory on the nodes is the working directory of the spawning program: can we ask orted to use another directory? Changing the working th

[OMPI users] MPI_Comm_spawn and orted

2009-04-16 Thread Jerome BENOIT
Hello List, I have just noticed that, when MPI_Comm_spawn is used to launch programs around, orted's working directory on the nodes is the working directory of the spawning program: can we ask orted to use another directory? Thanks in advance, Jerome

[OMPI users] default current working directory of started application

2009-04-16 Thread Jerome BENOIT
Hello List, in the FAQ "Running MPI jobs", point 12, we read: -wdir : Set the working directory of the started applications. If not supplied, the current working directory is assumed (or $HOME, if the current working directory does not exist on all nodes). Is there a way to configure the default alte