Are you running any firewall software?
Sent from my phone. No type good.
On May 25, 2011, at 10:41 PM, "Jagannath Mondal" wrote:
> Hi,
> I am having a problem in running mpirun over multiple nodes.
> To run a job over two 8-core processors, I generated a hostfile as follows:
> yethiraj30
Hi,
I am having a problem in running mpirun over multiple nodes.
To run a job over two 8-core processors, I generated a hostfile as follows:
yethiraj30 slots=8 max_slots=8
yethiraj31 slots=8 max_slots=8
These two machines are interconnected and I have installed openmpi 1.3.3.
Then If I try t
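For reference, a hostfile like the one above is typically handed to the launcher with something along these lines (the hostfile name and executable are placeholders, not taken from the original message):
mpirun --hostfile myhosts -np 16 ./my_mpi_app
Here -np 16 requests one process for each of the 16 listed slots across the two 8-core machines.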
George --
When I run 10 copies on the same node with btl tcp,self (no sm or openib),
valgrind reports the following to me (using ompi-1.4 branch HEAD):
==23753== Invalid write of size 1
==23753==    at 0x4C6EA31: non_overlap_copy_content_same_ddt (dt_copy.h:170)
==23753==    by 0x4C6CC3B: ompi_d
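For anyone reproducing a run of that shape, the launch typically looks something like the following (the executable name is a placeholder; one common way to get per-rank valgrind output is to prefix the application with valgrind on the mpirun command line):
mpirun -np 10 --mca btl tcp,self valgrind ./allgather_test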
I'm afraid that we won't have an answer until our Windows guy comes back from
vacation. Sorry! :-(
On May 23, 2011, at 5:45 AM, AMARNATH, Balachandar wrote:
> Hi,
>
> I still don’t understand why the command is trying to open a configuration
> file from a non-existing location. For me its w
On May 24, 2011, at 4:42 AM, francoise.r...@obs.ujf-grenoble.fr wrote:
>> CALL MPI_COMM_SIZE(id%COMM, id%NPROCS, IERR )
>> IF ( id%PAR .eq. 0 ) THEN
>>    IF ( id%MYID .eq. MASTER ) THEN
>>       color = MPI_UNDEFINED
>>    ELSE
>>       color = 0
>>    END IF
>>
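For context, the color logic quoted above feeds an MPI_COMM_SPLIT in which the non-participating master passes MPI_UNDEFINED and gets MPI_COMM_NULL back. A minimal standalone C sketch of that pattern (variable names are illustrative, not taken from the MUMPS source):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, color;
    MPI_Comm work_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Rank 0 plays the non-participating master: it passes MPI_UNDEFINED
       and receives MPI_COMM_NULL from the split. */
    color = (rank == 0) ? MPI_UNDEFINED : 0;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &work_comm);

    if (work_comm == MPI_COMM_NULL)
        printf("rank %d is not in the worker communicator\n", rank);
    else
        MPI_Comm_free(&work_comm);

    MPI_Finalize();
    return 0;
}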
This looks like your installation is busted somehow. Can you send all the
information listed here:
http://www.open-mpi.org/community/help/
On May 24, 2011, at 4:05 PM, charles reid wrote:
> Hi -
>
> I'm trying to compile a simple hello world program with mpicc,
>
> $ cat test.c
> #incl
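For reference, a minimal MPI hello world of the kind being compiled usually looks like the sketch below (a generic example, not the poster's actual test.c); it would be built with something like "mpicc test.c -o test":

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}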
On May 24, 2011, at 7:29 AM, Salvatore Podda wrote:
>> Yes, it was a typo, I usually add the "sm" flag to the "--mca btl"
>> option. However, I think this is not mandatory, as I suppose
>> openmpi uses the so-called "Law of Least Astonishment"
>> also in this case and adopts "sm" for the intra-no
I've tried it on my home Ubuntu 10.04, 64-bit version. It crashes with 5-7
ranks, and with 9 or more. I simply downloaded the 1.4.3 version (
http://www.open-mpi.org/software/ompi/v1.4/downloads/openmpi-1.4.3.tar.gz):
- configure --prefix=`pwd`/install && make install
- cd ~/projects/gather
- ~/proj
Andrew,
I have 8 octo-core nodes running under Caos NSA release 1.0.29 (Cato)
2009.11.13, connected with IB. I ran your test with one process per core, with
different distributions, and all gave the same result.
george.
On May 25, 2011, at 14:35 , Andrew Senin wrote:
> Hi George,
>
> Thanks a
Not exactly. I have 16-core nodes. Even if I run all 9 ranks on the same
node it fails (with --mca btl sm,self). I also tried running on different
nodes (3 nodes, 3 ranks each on each node) with openib and tcp - the same
effect. Also as I wrote in another message I could see this effect on vbox
wit
Hi George,
Thanks a lot for your attempt! Possibly this is something OS specific? I'm
using CentOS release 5.4 x86_64 on the cluster. I also tried it on my
virtual box with CentOS 5.3 x86_64 (ompi 1.4.3). The same effect. On what OS
did you try? If it helps I can upload the virtual box image on m
On Wednesday, May 25, 2011 01:16:04 PM Andrew Senin wrote:
> Hello list,
>
> I have an application which uses MPI_Allgather with derived types. It works
> correctly with mpich2 and mvapich2. However it crashes periodically with
> openmpi2. After investigation I found that the crash takes place whe
Andrew,
I tried with a freshly installed 1.4.3 but I can't reproduce your issue. I
tried with the 1.5 and the trunk and all complete your code without errors. Not
even valgrind found anything to complain about ...
george.
On May 25, 2011, at 08:22 , Andrew Senin wrote:
> Sorry. I'm using O
Sorry. I'm using OpenMPI 1.4.3.
Thanks,
-Andrew
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Peter Kjellstrom
> Sent: Wednesday, May 25, 2011 4:19 PM
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] MPI_Allgather with deriv
On Wednesday, May 25, 2011 01:16:04 PM Andrew Senin wrote:
> Hello list,
>
> I have an application which uses MPI_Allgather with derived types. It works
> correctly with mpich2 and mvapich2. However it crashes periodically with
> openmpi2.
Which version of OpenMPI are you using? There is no such
Hello list,
I have an application which uses MPI_Allgather with derived types. It works
correctly with mpich2 and mvapich2. However it crashes periodically with
openmpi2. After investigation I found that the crash takes place when I use
derived datatypes with MPI_Allgather and number of ranks g
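For anyone trying to reproduce this class of failure, a minimal sketch of MPI_Allgather with a derived (struct) datatype looks roughly like the following; the struct layout and sizes are illustrative and not taken from the reporter's code:

#include <mpi.h>
#include <stdlib.h>

struct item { int id; double values[4]; };

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Datatype tmp_type, item_type;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Describe struct item as an MPI derived datatype. */
    int blocklens[2] = { 1, 4 };
    MPI_Aint displs[2], base;
    MPI_Datatype types[2] = { MPI_INT, MPI_DOUBLE };
    struct item probe;
    MPI_Get_address(&probe, &base);
    MPI_Get_address(&probe.id, &displs[0]);
    MPI_Get_address(&probe.values[0], &displs[1]);
    displs[0] -= base;
    displs[1] -= base;
    MPI_Type_create_struct(2, blocklens, displs, types, &tmp_type);
    /* Resize so the extent matches sizeof(struct item), including padding. */
    MPI_Type_create_resized(tmp_type, 0, sizeof(struct item), &item_type);
    MPI_Type_commit(&item_type);
    MPI_Type_free(&tmp_type);

    struct item mine = { rank, { 0.0, 0.0, 0.0, 0.0 } };
    struct item *all = malloc(size * sizeof(struct item));

    /* Each rank contributes one item; every rank receives all of them. */
    MPI_Allgather(&mine, 1, item_type, all, 1, item_type, MPI_COMM_WORLD);

    free(all);
    MPI_Type_free(&item_type);
    MPI_Finalize();
    return 0;
}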