Hello all,
I have a very basic question regarding MPI communication.
In my task, I am comparing Scatterv and MPI-IO.
1) With Scatterv, I scatter all the data to the other ranks and scan for the
specific characters:
MPI_Scatterv(chunk, send_counts, displacements, MPI_CHAR, copychunk, …
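A minimal sketch of the Scatterv approach described above, assuming rank 0
holds the full character buffer; recv_count and the chunk layout are
illustrative names, not from the original post:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int total = 1 << 20;                    /* total chars on root */
    int *send_counts   = malloc(size * sizeof(int));
    int *displacements = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) {
        send_counts[i]   = total / size + (i < total % size ? 1 : 0);
        displacements[i] = i ? displacements[i - 1] + send_counts[i - 1] : 0;
    }

    char *chunk = (rank == 0) ? malloc(total) : NULL;  /* root's buffer */
    int recv_count = send_counts[rank];
    char *copychunk = malloc(recv_count);

    MPI_Scatterv(chunk, send_counts, displacements, MPI_CHAR,
                 copychunk, recv_count, MPI_CHAR, 0, MPI_COMM_WORLD);

    /* ... each rank scans copychunk for the specific characters ... */

    MPI_Finalize();
    return 0;
}
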
Hi all,
Ran into a problem running the openshmem examples/ directory using Open MPI
1.8 compiled with
--with-knem=/opt/knem-1.1.90mlnx2 --with-hcoll=/opt/mellanox/hcoll
--with-mxm=/opt/mellanox/mxm --with-fca=/opt/mellanox/fca
lib/openmpi/mca_coll_hcoll.so has an undefined symbol:
hcoll_group_destroy_notify
I…
The latest Rocks 6.2 carries this version only.
On Tue, Apr 8, 2014 at 3:49 AM, Jeff Squyres (jsquyres)
wrote:
> Open MPI 1.4.3 is *ancient*. Please upgrade -- we just released Open MPI
> 1.8 last week.
>
> Also, please look at this FAQ entry -- it steps you through a lot of basic
> troubleshooting s…
And thank you very much.
On Tue, Apr 8, 2014 at 3:07 PM, Nisha Dhankher -M.Tech(CSE) <
nishadhankher-coaese...@pau.edu> wrote:
> The latest Rocks 6.2 carries this version only.
>
>
> On Tue, Apr 8, 2014 at 3:49 AM, Jeff Squyres (jsquyres) <
> jsquy...@cisco.com> wrote:
>
>> Open MPI 1.4.3 is *ancient*.
Yes, usually the MPI libraries don't allow that. You can launch
another thread for the computation, make calls to MPI_Test during that
time, and join at the end.
Cheers,
2014-04-07 4:12 GMT+01:00 Zehan Cui :
> Hi Matthieu,
>
> Thanks for your suggestion. I tried MPI_Waitall(), but the results are …
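
A minimal sketch of the overlap pattern Matthieu describes, here with the
MPI_Test polling done from the main loop rather than a separate thread; the
buffer size, tag, and loop structure are illustrative assumptions (run with
at least two ranks):

#include <mpi.h>

#define N 1000000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    static double buf[N];
    MPI_Request req = MPI_REQUEST_NULL;
    if (rank == 0)
        MPI_Isend(buf, N, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD, &req);
    else if (rank == 1)
        MPI_Irecv(buf, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &req);

    int done = 0;
    while (!done) {
        /* ... do a chunk of the computation here ... */
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);  /* progress the transfer */
    }

    MPI_Finalize();
    return 0;
}
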
You should ping the Rocks maintainers and ask them to upgrade. Open MPI 1.4.3
was released in September of 2010.
On Apr 8, 2014, at 5:37 AM, Nisha Dhankher -M.Tech(CSE)
wrote:
> The latest Rocks 6.2 carries this version only.
>
>
> On Tue, Apr 8, 2014 at 3:49 AM, Jeff Squyres (jsquyres)
> wrote: …
Hello,
I think that MPI opens its sockets even when there is only one process
on the same machine. Is that right?
Regards.
On Tue, Apr 8, 2014 at 9:43 AM, Hamid Saeed wrote:
> Hello all,
>
> I have a very basic question regarding MPI communication.
>
> In my Task, what i am doing is..
> Compari…
If your examples are anything more than trivial code, we'll probably need a
signed contribution agreement. This is a bit of a hassle, but it's an
unfortunate necessity so that we can ensure that the Open MPI code base stays
100% open source / unencumbered by IP restrictions:
http://www.ope
Can someone kindly reply?
On Tue, Apr 8, 2014 at 1:01 PM, Hamid Saeed wrote:
> Hello,
> I think that MPI opens its sockets even when there is only one process
> on the same machine. Is that right?
> Regards.
>
>
> On Tue, Apr 8, 2014 at 9:43 AM, Hamid Saeed wrote:
>
>> Hello all,
>>
>> I have a…
Hi,
What MOFED version are you running?
Best,
Josh
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Anthony Alba
Sent: Tuesday, April 08, 2014 4:53 AM
To: us...@open-mpi.org
Subject: [OMPI users] mca_coll_hcoll.so: undefined symbol
hcoll_group_destroy_notify
Hi all,
> Ran into a pr…
I suspect it all depends on when you start the clock. If the data is sitting in
the file at time=0, then the file I/O method will likely be faster as every
proc just reads its data in parallel - no comm required as it is all handled by
the parallel file system.
I confess I don't quite understand…
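
A minimal sketch of the file-I/O side of the comparison, assuming a shared
input file readable by all ranks; the file name and chunk arithmetic are
illustrative:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "input.dat",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    MPI_Offset fsize;
    MPI_File_get_size(fh, &fsize);
    MPI_Offset chunk  = fsize / size;
    MPI_Offset offset = rank * chunk;
    if (rank == size - 1)               /* last rank takes the remainder */
        chunk = fsize - offset;

    char *buf = malloc((size_t)chunk);
    MPI_File_read_at_all(fh, offset, buf, (int)chunk, MPI_CHAR,
                         MPI_STATUS_IGNORE);

    /* ... each rank scans buf independently; no scatter needed ... */

    free(buf);
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
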
Yes, I meant a parallel file system.
And can you kindly explain what exactly happens if
rank 0 wants to send to rank 0?
Why does MPI-IO differ in time consumption from rank-0-to-rank-0
communication?
On Tue, Apr 8, 2014 at 4:45 PM, Ralph Castain wrote:
> I suspect it all depends on when you s…
On Apr 8, 2014, at 8:05 AM, Hamid Saeed wrote:
> Yes, I meant a parallel file system.
>
> And can you kindly explain what exactly happens if
> rank 0 wants to send to rank 0?
It goes through the "self" BTL, which is pretty fast but does require a
little time. You also have the collective operation…
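
For illustration, a minimal sketch of a rank-0-to-rank-0 transfer; a plain
blocking MPI_Send to oneself can deadlock once the message exceeds the eager
limit, so MPI_Sendrecv (or Isend/Irecv) is the safe pattern. Buffer contents
and sizes here are assumptions:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char out[1024] = "payload", in[1024];
    if (rank == 0)
        MPI_Sendrecv(out, 1024, MPI_CHAR, 0, 0,   /* send to self   */
                     in,  1024, MPI_CHAR, 0, 0,   /* recv from self */
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}
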
In general, benchmarking is very hard.
For example, you almost certainly want to do some "warmup" communications of
the pattern that you're going to measure. This gets all communications setup,
resources allocated, caches warmed up, etc.
That is, there's generally some one-time setup that happens…
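
A minimal sketch of the warmup idea, using MPI_Bcast as a stand-in for
whatever pattern is being measured; iteration counts and message size are
illustrative:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[4096];
    /* Warmup: same pattern as the measured loop, results discarded. */
    for (int i = 0; i < 100; i++)
        MPI_Bcast(buf, sizeof buf, MPI_CHAR, 0, MPI_COMM_WORLD);

    MPI_Barrier(MPI_COMM_WORLD);       /* start everyone together */
    double t0 = MPI_Wtime();
    for (int i = 0; i < 1000; i++)
        MPI_Bcast(buf, sizeof buf, MPI_CHAR, 0, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("avg bcast: %g us\n", (t1 - t0) / 1000 * 1e6);

    MPI_Finalize();
    return 0;
}
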
On 04/08/2014 06:37 AM, Jeff Squyres (jsquyres) wrote:
You should ping the Rocks maintainers and ask them to upgrade.
Open MPI 1.4.3 was released in September of 2010.
On Rocks, you can install Open MPI from source (and any other software
application, by the way) on their standard NFS shared d…
Hello,
Recently a couple of our users have experienced difficulties with compute
jobs failing with Open MPI 1.6.4 compiled against GCC 4.7.2, with the nodes
running kernel 2.6.32-279.5.2.el6.x86_64. The error is:
File locking failed in ADIOI_Set_lock(fd 7,cmd F_SETLKW/7,type F_WRLCK/1,whence
0…
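
That ADIOI_Set_lock failure usually means the filesystem under the MPI-IO
file (often NFS) is not supporting fcntl locks. One commonly suggested
workaround is to disable ROMIO's data sieving, which avoids some of the
locking paths; a hedged sketch with an illustrative file name, and no
guarantee it applies to this particular cluster:

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, "romio_ds_write", "disable");  /* no write data sieving */
    MPI_Info_set(info, "romio_ds_read",  "disable");  /* no read data sieving  */

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "output.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, info, &fh);
    /* ... collective writes ... */
    MPI_File_close(&fh);

    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}
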
Joshua,
I am running MOFED 2.1-1.0.6 and self-compiled openmpi-1.8 using
--with-hcoll. The symbol is in the 1.8 source but not exported by MOFED's
/opt/mellanox/hcoll/lib*.
On 8 Apr 2014 21:47, "Joshua Ladd" wrote:
> Hi,
>
> What MOFED version are you running?
>
> Best,
> Josh
>
> *F…
Thanks, it looks like I have to do the overlapping myself.
On Tue, Apr 8, 2014 at 5:40 PM, Matthieu Brucher wrote:
> Yes, usually the MPI libraries don't allow that. You can launch
> another thread for the computation, make calls to MPI_Test during that
> time and join at the end.
>
> Cheers,
>
This is a change from OMPI 1.7.4 to 1.7.5/1.8: the symbol is not used by the
openmpi-1.7.4 in MOFED 2.1-1.0.6 (I rebuilt the MOFED RPM to enable hcoll).
- Anthony
Yes, I think I can sign this agreement. I was planning to put them up on
GitHub under the Apache license anyway.
Yes for the FAQ as well. I will try to send some along with the code samples.
Thank you,
Saliya
On Tue, Apr 8, 2014 at 9:06 AM, Jeff Squyres (jsquyres)
wrote:
> If your examples are anything …
The devel list has responded that this requires a later drop of hcoll than
in MOFED 2.1-1.0.6.
- Anthony
On Apr 9, 2014 9:49 AM, "Anthony Alba" wrote:
> This is a change from OMPI 1.7.4 to 1.7.5/1.8: the symbol is not used by
> the openmpi-1.7.4 in MOFED 2.1-1.0.6 (I rebuilt the MOFED RPM to enable hc…