Dmitry:
It turns out that by default in Open MPI 1.7, configure enables warnings for
deprecated MPI functionality. In Open MPI 1.6, these warnings were disabled by
default.
That explains why you would not see this issue in the earlier versions of Open
MPI.
I assume that gcc must have added support for a message argument to the
deprecated attribute at some point.
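If memory serves, gcc added the message form of the attribute around gcc 4.5.
As a hypothetical reproducer (my sketch, not code from the thread):

    /* deprecated_msg.c -- hypothetical reproducer (not from the thread).
     * gcc accepts a message argument to the deprecated attribute; a
     * compiler that implements only the bare form fails here with an
     * error like: attribute "__deprecated__" does not take arguments. */
    __attribute__((__deprecated__("this function is obsolete")))
    int old_api(void);

    int main(void)
    {
        return 0; /* calling old_api() would also trigger the warning */
    }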
So I configured and compiled a simple MPI program.
Now the issue is that when I try to do the same thing on my computer on a
corporate network, I get this error:
C:\OpenMPI\openmpi-1.6\installed\bin>mpiexec MPI_Tutorial_1.exe
On Jun 18, 2012, at 11:45 AM, Harald Servat wrote:
>> 2. The two machines need to be able to open TCP connections to each other on
>> random ports.
>
> That will be harder. Do both machines need to open TCP connections to
> random ports, or just one?
Both.
To be specific: there are two layers …
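(My reading, not a quote from the thread: the two layers are presumably the
runtime's out-of-band TCP channel and the TCP BTL that carries MPI traffic.)
If arbitrary ports are a problem, the TCP BTL can be pinned to a fixed window
so the firewall only needs a known range open. A sketch, assuming these MCA
parameter names (verify with "ompi_info --param btl tcp"; the out-of-band
layer has analogous parameters whose names vary by version):

    mpirun -mca btl_tcp_port_min_v4 46000 -mca btl_tcp_port_range_v4 100 ...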
On Mon, 18 Jun 2012 at 11:39 -0400, Jeff Squyres wrote:
> On Jun 18, 2012, at 11:12 AM, Harald Servat wrote:
>
> > Thank you, Jeff. Now with the following commands it starts, but it gets
> > blocked before starting. Could this be a problem with firewalls? Do both
> > M1 and M2 need to be able to log into the other machine through ssh?
On Jun 18, 2012, at 11:12 AM, Harald Servat wrote:
> Thank you, Jeff. Now with the following commands it starts, but it gets
> blocked before starting. Could this be a problem with firewalls? Do both
> M1 and M2 need to be able to log into the other machine through ssh?
I'm not sure what you mean by "blocked" …
On Mon, 18 Jun 2012 at 10:56 -0400, Jeff Squyres wrote:
> On Jun 18, 2012, at 10:45 AM, Harald Servat wrote:
>
> > # $HOME/aplic/openmpi/1.6/bin/mpirun -np 1 -host
> > localhost ./init_barrier_fini : -x
> > LD_LIBRARY_PATH=/home/Computational/harald/aplic/openmpi/1.6/lib
> > -prefix /home/Computational/harald/aplic/openmpi/1.6/ -x …
Hi Dmitry:
Let me look into this.
Rolf
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf
Of Dmitry N. Mikushin
Sent: Monday, June 18, 2012 10:56 AM
To: Open MPI Users
Cc: Олег Рябков
Subject: Re: [OMPI users] NVCC mpi.h: error: attribute "__deprecated__" does
not take arguments
On Jun 18, 2012, at 10:45 AM, Harald Servat wrote:
> # $HOME/aplic/openmpi/1.6/bin/mpirun -np 1 -host
> localhost ./init_barrier_fini : -x
> LD_LIBRARY_PATH=/home/Computational/harald/aplic/openmpi/1.6/lib
> -prefix /home/Computational/harald/aplic/openmpi/1.6/ -x
> PATH=/home/Computational/harald
Yeah, definitely. Thank you, Jeff.
- D.
2012/6/18 Jeff Squyres
> On Jun 18, 2012, at 10:41 AM, Dmitry N. Mikushin wrote:
>
> > No, I'm configuring with gcc, and for openmpi-1.6 it works with nvcc
> > without a problem.
>
> Then I think Rolf (from Nvidia) should figure this out; I don't have
> access to nvcc. :-)
On Jun 18, 2012, at 10:41 AM, Dmitry N. Mikushin wrote:
> No, I'm configuring with gcc, and for openmpi-1.6 it works with nvcc without
> a problem.
Then I think Rolf (from Nvidia) should figure this out; I don't have access to
nvcc. :-)
> Actually, nvcc has always been meant to be more or less compatible with
> gcc, as far as I know. …
Thank you for your answers. I've tried that but it doesn't seem to work.
The latest command I've issued is
# $HOME/aplic/openmpi/1.6/bin/mpirun -np 1 -host
localhost ./init_barrier_fini : -x
LD_LIBRARY_PATH=/home/Computational/harald/aplic/openmpi/1.6/lib
-prefix /home/Computational/harald/aplic/openmpi/1.6/ …
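For comparison, a complete two-host invocation might look like the following;
the host names M1/M2 and the second app context are my assumptions, not taken
from the thread:

    $HOME/aplic/openmpi/1.6/bin/mpirun \
        -np 1 -host M1 ./init_barrier_fini : \
        -np 1 -host M2 \
        -prefix /home/Computational/harald/aplic/openmpi/1.6/ \
        -x LD_LIBRARY_PATH -x PATH ./init_barrier_fini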
No, I'm configuring with gcc, and for openmpi-1.6 it works with nvcc
without a problem.
Actually, nvcc has always been meant to be more or less compatible with gcc,
as far as I know. My guess is that in the case of the trunk, nvcc is the
source of the issue.
And with ./configure CC=nvcc etc. it won't build:
/home/d
Did you configure and build Open MPI with nvcc?
I ask because Open MPI should auto-detect whether the underlying compiler can
handle a message argument with the deprecated directive or not.
You should be able to build Open MPI with:
./configure CC=nvcc etc.
make clean all install
I …
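As a quick check of which compiler an existing installation was actually
built with, ompi_info reports the configured compilers; for example:

    ompi_info | grep -i compiler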
Hello,
With openmpi svn trunk as of
Repository Root: http://svn.open-mpi.org/svn/ompi
Repository UUID: 63e3feb5-37d5-0310-a306-e8a459e722fe
Revision: 26616
we are observing the following strange issue (see below). What do you think:
is it a problem with NVCC or Open MPI?
Thanks,
- Dima.
[dmikushin
I believe you could resolve this by specifying the interfaces to use in the
order you want them checked. In other words, you might try this:
-mca btl_tcp_if_include eth1,eth0
where eth1 is the NIC connecting the internal subnet in the cloud, and eth0 is
the NIC connecting them to the Internet.
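For instance, the full command might look like this (host names and binary
are placeholders of mine):

    mpirun -np 2 -host cloud1,cloud2 -mca btl_tcp_if_include eth1,eth0 ./my_app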
One further point that I missed in my earlier note: if you are starting the
parent as a singleton, then you are fooling yourself about the "without mpirun"
comment. A singleton immediately starts a local daemon to act as mpirun so that
comm_spawn will work. Otherwise, there is no way to launch the spawned
processes.
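To make the singleton case concrete, here is a minimal sketch of mine (not
code from the thread): start the parent directly as ./parent; MPI_Init brings
up the local daemon described above, and MPI_Comm_spawn then launches the
children ("worker" is a hypothetical binary found on the PATH):

    /* parent.c -- hypothetical singleton parent.
     * Build with: mpicc parent.c -o parent; run directly as ./parent */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm children;

        MPI_Init(&argc, &argv);  /* as a singleton, starts a local daemon */
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_SELF, &children, MPI_ERRCODES_IGNORE);
        MPI_Comm_disconnect(&children);  /* detach from the spawned group */
        MPI_Finalize();
        return 0;
    }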
You might also want to set up your shell startup files on each machine to
reflect the proper PATH and LD_LIBRARY_PATH. E.g., if you have a different
.bashrc on each machine, just have it set PATH and LD_LIBRARY_PATH properly
*for that machine*.
To be clear: it's usually easiest to install OMPI …
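A per-machine ~/.bashrc along these lines would do it (paths borrowed from
the mpirun commands earlier in this thread; adjust for each machine):

    export PATH=$HOME/aplic/openmpi/1.6/bin:$PATH
    export LD_LIBRARY_PATH=$HOME/aplic/openmpi/1.6/lib:$LD_LIBRARY_PATH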
Try adding "-x LD_LIBRARY_PATH=" to your mpirun command line.
On Jun 18, 2012, at 7:11 AM, Harald Servat wrote:
> Hello list,
>
> I'd like to use OpenMPI to execute an MPI application on two different
> machines.
>
> Up to now, I've configured and installed OpenMPI 1.6 on my two systems
> (each in a different directory). …
Hello list,
I'd like to use OpenMPI to execute an MPI application on two different
machines.
Up to now, I've configured and installed OpenMPI 1.6 on my two systems
(each in a different directory). When I execute binaries within a single
system (either one), the application works well. However, when I try …
On 6/16/2012 8:03 AM, Roland Schulz wrote:
Hi,
I would like to start a single process without mpirun and then use
MPI_Comm_spawn to start up as many processes as required. I don't want
the parent process to take up any resources, so I tried to disconnect
the intercommunicator and then finalize …
Hi Jeff,
Thank you very much for your suggestions. They help us a lot. We will
reconsider the whole model in more detail.
I am thinking of separating the whole workflow into two different kinds of
processes: one is the system process and the other is the MPI process, which
is invoked (how to invoke …
Hi,
I'm running Open MPI on the Rackspace cloud over the Internet using MPI_Spawn.
That means I run the parent on my PC and the children on Rackspace cloud
machines. Rackspace provides direct IP addresses for the machines (no NAT),
which is why this is possible.
Now, there is a communicator involving only the …