Galen Shipman wrote:
We have found a potential issue with BPROC that may affect Open MPI.
Open MPI by default uses PTYs for I/O forwarding; if PTYs aren't
set up on the compute nodes, Open MPI will revert to using pipes.
Recently (today) we found a potential issue with PTYs and BPROC. A
simp
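
Not from the original post, but one quick way to check whether PTYs can actually be allocated on a given compute node (run it there directly, or push it out with bpsh) is a small openpty() test along these lines:

/* ptycheck.c -- minimal sketch: try to allocate a PTY pair on this node.
 * Compile with: gcc ptycheck.c -o ptycheck -lutil
 * (with glibc, openpty() lives in libutil)
 */
#include <stdio.h>
#include <unistd.h>
#include <pty.h>

int main(void)
{
    int master, slave;

    if (openpty(&master, &slave, NULL, NULL, NULL) != 0) {
        perror("openpty");   /* PTYs not usable on this node */
        return 1;
    }
    printf("PTY pair allocated: master fd %d, slave fd %d\n", master, slave);
    close(slave);
    close(master);
    return 0;
}

If this fails on the compute nodes but succeeds on the head node, that points at the PTY setup on the nodes rather than at Open MPI itself.
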
Title: Re: [OMPI users] x11 forwarding
I don't think that's the problem. As far as I can tell, the
DISPLAY environment variable is being set properly on the slave (it
will sometimes have a different value than in the shell where mpirun
was executed).
Dave
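
One way to see exactly what the backend processes end up with is to print DISPLAY from every rank. A minimal sketch (not from this thread; it only assumes mpicc is on your path):

/* printdisplay.c -- show the DISPLAY each MPI process actually inherits.
 * Compile with: mpicc printdisplay.c -o printdisplay
 * Run it with the same mpirun/wwmpirun options used for pyMPI.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    char host[256];
    const char *disp;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    gethostname(host, sizeof(host));
    disp = getenv("DISPLAY");
    printf("rank %d on %s: DISPLAY=%s\n", rank, host, disp ? disp : "(unset)");
    MPI_Finalize();
    return 0;
}

If every rank reports the DISPLAY from the shell where mpirun was run, rather than a value set by ssh -X on its own node, that would be consistent with the environment-forwarding behavior Ralph describes below.
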
Ralph H Castain wrote:
Actually, I believe at least some of this may be a bug on our part. We
currently pick up the local environment and forward it to the remote nodes
as the environment for use by the backend processes. I have seen quite a few
environment variables in that list, including DISPLAY, which would create
I'm using caos linux (developed at LBL), which has the wrapper wwmpirun
around mpirun, so my command is something like
wwmpirun -np 8 -- -x PYTHONPATH --mca pls_rsh_agent '"ssh -X"'
/usr/local/bin/pyMPI
This is essentially the same as
mpirun -np 8 -x PYTHONPATH --mca pls_rsh_agent '"ssh -X"'
/usr/local/bin/pyMPI
Looking VERY briefly at the GAMMA API here:
http://www.disi.unige.it/project/gamma/gamma_api.html
It looks like one could create a GAMMA BTL with a minimal amount of
trouble.
I would encourage your group to do this!
There is quite a bit of information regarding the BTL interface, and
for GA
What does your command line look like?
- Galen
On Nov 29, 2006, at 7:53 PM, Dave Grote wrote:
I cannot get X11 forwarding to work using mpirun. I've tried all of the
standard methods, such as setting pls_rsh_agent = ssh -X, using xhost,
and a few other things, but nothing works in general.
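
For what it's worth (not part of the original exchange), one way to narrow down where forwarding breaks is to go a step beyond checking the variable and actually try to open the display from each rank. This assumes mpicc plus the X11 development headers are installed:

/* x11check.c -- try to connect to the X display named by DISPLAY from each rank.
 * Compile with: mpicc x11check.c -o x11check -lX11
 */
#include <stdio.h>
#include <unistd.h>
#include <mpi.h>
#include <X11/Xlib.h>

int main(int argc, char **argv)
{
    int rank;
    char host[256];
    Display *dpy;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    gethostname(host, sizeof(host));
    dpy = XOpenDisplay(NULL);   /* uses the DISPLAY environment variable */
    printf("rank %d on %s: DISPLAY=%s -> %s\n",
           rank, host, XDisplayName(NULL),
           dpy ? "connection opened" : "cannot open display");
    if (dpy)
        XCloseDisplay(dpy);
    MPI_Finalize();
    return 0;
}

Ranks that report the mpirun host's DISPLAY but cannot open it are seeing a forwarded variable without a matching X connection, which is the symptom the rest of this thread is about.
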