and potentially your MPI job)
My sysadmin used official IBM Spectrum packages to install MPI, so it's
quite strange that some components (PAMI) are missing. Any help?
Thanks
--
Ing. Gabriele Fatigati
HPC specialist
SuperComputing Applications and Innovation Department
Via Magnanel
Hi Reuti, I think it is freely available. I also posted on the IBM Spectrum
forum; I'm waiting for a reply.
2017-05-18 14:10 GMT+02:00 Reuti :
> Hi,
>
> > Am 18.05.2017 um 14:02 schrieb Gabriele Fatigati :
> >
> > Dear OpenMPI users and developers, I'm using IBM Spec
>
>
> On 18 May 2017 at 14:10, Reuti wrote:
>
>> Hi,
>>
>> > Am 18.05.2017 um 14:02 schrieb Gabriele Fatigati:
>> >
>> > Dear OpenMPI users and developers, I'm using IBM Spectrum MPI 10.1.0
>>
>> I noticed this on IBM'
l
> for example
> ldd a.out
> should only point to IBM libraries
>
> Cheers,
>
> Gilles
>
>
> On Thursday, May 18, 2017, Gabriele Fatigati wrote:
>
>> Dear OpenMPI users and developers, I'm using IBM Spectrum MPI 10.1.0 based
>> on OpenMPI, so I hop
he output ?
>
>
> Cheers,
>
> Gilles
>
> On 5/18/2017 10:41 PM, Gabriele Fatigati wrote:
>
>> Hi Gilles, attached the requested info
>>
>> 2017-05-18 15:04 GMT+02:00 Gilles Gouaillardet <
>> gilles.gouaillar...@gmail.com <mailto:gilles.gouaill
correct drivers or libraries loaded.
>
> I have had similar messages when using Infiniband on x86 systems - which
> did not have libibverbs installed.
>
>
> On 19 May 2017 at 08:41, Gabriele Fatigati wrote:
>
>> Hi Gilles, using your command:
>>
>> [openpower:88
-- Forwarded message --
From: Gabriele Fatigati
Date: 2017-05-19 9:07 GMT+02:00
Subject: Re: [OMPI users] IBM Spectrum MPI problem
To: John Hearns
If I understand correctly, when I launch mpirun, by default it tries to use
InfiniBand, but because there is no InfiniBand module the run
t work, can run and post the logs)
>
> mpirun --mca pml ^pami --mca pml_base_verbose 100 ...
>
>
> Cheers,
>
>
> Gilles
>
>
> On 5/19/2017 4:01 PM, Gabriele Fatigati wrote:
>
>> Hi John,
>> Infiniband is not used, there is a single node on this mach
parameter "orte_base_help_aggregate" to 0 to see
all help / error messages
[openpower:88867] 1 more process has sent help message help-mpi-runtime.txt
/ mpi_init:startup:pml-add-procs-fail
2017-05-19 9:22 GMT+02:00 Gabriele Fatigati :
> Hi Gilles,
>
> using your command with one MPI process
e the physical interface cards in
>>> these systems, but you do not have the correct drivers or
>>> libraries loaded.
>>>
>>> I have had similar messages when using Infiniband on x86 systems -
>>> which did not have libibverbs installed
>
>
> Cheers,
>
>
> Gilles
>
>
>
> On 5/19/2017 4:23 PM, Gabriele Fatigati wrote:
>
>> Oh no, by using two procs:
>>
>>
>> findActiveDevices Error
>> We found
; Gilles
>
> On 5/19/2017 4:28 PM, Gabriele Fatigati wrote:
>
>> Using:
>>
>> mpirun --mca pml ^pami --mca pml_base_verbose 100 -n 2 ./prova_mpi
>>
>> I attach the output
>>
>> 2017-05-19 9:16 GMT+02:00 John Hearns via users > <mailto:users@l
to is that
>> their license manager is blocking you from running, albeit without a really
>> nice error message. I’m sure that’s something they are working on.
>>
>> If you really want to use Spectrum MPI, I suggest you contact them about
>> purchasing it.
>>
>
enMPI. I suspect the
same for other collective communications. Can someone explain why
MPI_Reduce has this strange behaviour?
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casale
National University
> Ph: (+61) 0417 163 509    Skype: terry.frankcombe
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomp
>
> On Sep 9, 2010, at 1:14 AM, Gabriele Fatigati wrote:
>
> More in depth,
>
> total execution time without Barrier is about 1 sec.
>
> Total execution time with Barrier+Reduce is 9453, with 128 procs.
>
> 2010/9/9 Terry Frankcombe
>
>> Gabriele,
>>
> Yes, however, it seems Gabriele is saying the total execution time
> *drops* by ~500 s when the barrier is put *in*. (Is that the right way
> around, Gabriele?)
>
> That's harder to explain as a sync issue.
>
>
>
> > On Sep 9, 2010, at 1:14 AM, Gabriele Fatigat
collective routine, performance can behave very differently.
Thanks a lot.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it    Tel: +39 051 61717
Thanks Jeff,
and... what about RDMA? Does it work only with point-to-point, or also with
collectives?
2010/9/22 Jeff Squyres
> On Sep 22, 2010, at 3:46 AM, Gabriele Fatigati wrote:
>
> > I'm tuning collectives of OpenMPI 1.4.2 with OTPO. I have a little
> question about BTL. Th
collective more time on one
communicator, but is it possible with different collectives?
Thanks a lot.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it
ocess
MPI_IBcast(MPI_COMM_WORLD, request_1) // second Bcast for another process
Because the first Bcast of the second process would match the first Bcast of
the first process, which is wrong.
Is that right?
2010/9/23 Jeff Squyres
> On Sep 23, 2010, at 6:28 AM, Gabriele Fatigati wrote:
>
>
;
> From:
> Jeff Squyres
> To: Open MPI Users Date: 09/23/2010 10:13 AM Subject: Re:
> [OMPI users] Question about Asynchronous collectives Sent by:
> users-boun...@open-mpi.org
> --
>
>
>
> On Sep 23, 2010, at 10:00 AM, Gabriele Fatigati wrote:
>
> &
Dear OpenMPI users,
if OpenMPI is numa-compiled, is memory affinity enabled by default? I ask because
I didn't find a standalone memory affinity (or similar) parameter to set to 1.
Thanks a lot.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomput
Sorry, I meant:
is memory affinity enabled by default by setting mprocessor_affinity=1 in
OpenMPI-numa?
2010/9/27 Gabriele Fatigati
> Dear OpenMPI users,
>
> if OpenMPI is numa-compiled, memory affinity is enabled by default? Because
> I didn't find memory affinity alone ( similar) para
finity? This is my question..
2010/9/27 Tim Prince
>
> On 9/27/2010 9:01 AM, Gabriele Fatigati wrote:
>>
>> if OpenMPI is numa-compiled, memory affinity is enabled by default? Because
>> I didn't find memory affinity alone ( similar) parameter to set at 1.
/a.out a b "c d"
Argument parsing doesn't work well. Arguments passed are:
a b c d
and not
a b "c d"
I think there is an issue in how the arguments are parsed when invoking Totalview.
Is this a bug in mpirun, or do I need to do it another way?
Thanks in advance.
--
Ing. Gabrie
Mm,
doing as you suggest the output is:
a
b
"c
d"
and not:
a
b
"c d"
2011/1/27 Reuti
> Hi,
>
> Am 27.01.2011 um 09:48 schrieb Gabriele Fatigati:
>
> > Dear OpenMPI users and developers,
> >
> > i'm using OpenMPI 1.4.3 and Intel compil
The problem is how mpirun scans input parameters when Totalview is invoked.
There is some wrong behaviour in the middle :(
2011/1/27 Reuti
> Am 27.01.2011 um 10:32 schrieb Gabriele Fatigati:
>
> > Mm,
> >
> > doing as you suggest the output is:
> >
> > a
es are lost in the process.
>
> Just start your debugged job with "totalview mpirun ..." and it should work
> fine.
>
> On Jan 27, 2011, at 3:00 AM, Gabriele Fatigati wrote:
>
> The problem is how mpirun scan input parameters when Totalview is invoked.
>
> The
>
> > Just start your debugged job with "totalview mpirun ..." and it should
> work fine.
> >
> > On Jan 27, 2011, at 3:00 AM, Gabriele Fatigati wrote:
> >
> >> The problem is how mpirun scan input parameters when Totalview is
> invoked.
> >
u try the attached
> patch to a trunk nightly tarball and see if that works for you?
>
> If it does, I can provide patches for v1.4 and v1.5 (the code moved a bit
> between these 3 versions, so I would need to adapt the patches a little).
>
>
>
> On Jan 27, 2011, at 9:06
Good!
Thanks for your support!
Regards.
2011/1/28 Jeff Squyres
> Thanks for the confirmation.
>
> I committed the fix to the trunk as of r24322 and filed CMR's for v1.4 and
> v1.5.
>
>
>
> On Jan 28, 2011, at 2:50 AM, Gabriele Fatigati wrote:
>
> > Hi
with Totalview, the problem appears at
line 188 of ompi/mca/io/romio/romio/adio/ad_nfs/ad_nfs_read.c:
MPI_Type_size(fd->filetype, &filetype_size);
here there is an explicit cast to int that can cause the problem.
Can someone help me?
Thanks in advance.
--
Ing. Gabriele Fatiga
Dear OpenMPI users,
is there a portable MPI macro to check whether a code is compiled with an MPI
compiler? Something like _OPENMP for OpenMP codes:
#ifdef _OPENMP
#endif
Does something like this exist?
#ifdef MPI
#endif
Thanks
--
Ing. Gabriele Fatigati
HPC specialist
SuperComputing Applications and
r portability issue. :-\
>
> On Aug 23, 2011, at 5:19 AM, Gabriele Fatigati wrote:
>
> > Dear OpenMPi users,
> >
> > is there some portable MPI macro to check if a code is compiled with MPI
> compiler? Something like _OPENMP for OpenMP codes:
> >
> > #ifdef _O
Dear OpenMPI users and developers,
are there any limitations or issues in using memory-mapped memory in MPI
processes? I would like to share some memory within a node without using OpenMP.
Thanks a lot.
--
Ing. Gabriele Fatigati
HPC specialist
SuperComputing Applications and Innovation Department
More in detail,
is it possible to use the mmap() function from an MPI process and share this
memory with other processes?
2011/10/13 Gabriele Fatigati
> Dear OpenMPI users and developers,
>
> is there some limitation or issues to use memory mapped memory into MPI
> processes? I w
al jump or move depends on uninitialised value(s)
==19931==at 0x4A06E5C: strcmp (mc_replace_strmem.c:412)
The same warning is not present if I use MAX_STRING_LEN+1 in MPI_Allgather.
Thanks in advance.
--
Ing. Gabriele Fatigati
HPC specialist
SuperComputing Applications and Innovation Dep
Dear OpenMPI users/developers,
can anybody help with this problem?
2012/1/13 Gabriele Fatigati
> Dear OpenMPI,
>
> using MPI_Allgather with MPI_CHAR type, I have a doubt about
> null-terminated character. Imaging I want to spawn node names where my
> progra
MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>gethostname(hostname, MAX_LEN - 1);
>where_null(hostname, MAX_LEN, rank);
>
>hostname_recv_buf = calloc(size * (MAX_LEN), (sizeof(char)));
>MPI_Allgather(hostname, MAX_LEN, MPI_CHAR,
> hostname_recv_buf, MAX_
Sorry,
this is the right code.
2012/1/27 Gabriele Fatigati
> Hi Jeff,
>
> The problem is that when I use strcmp on the Allgather buffer, Valgrind
> raises a warning.
>
> Please check if the attached code is right, where size(local_hostname) is
> very small.
>
> Valg
MPI is not looking for \0's; you gave it the
> explicit length of the buffer), but if they weren't filled with \0's, then
> the receiver's printf will have problems handling it.
>
>
>
> On Jan 27, 2012, at 4:03 AM, Gabriele Fatigati wrote:
>
> > Sorry,
>
iminate the warning, you should memset hostname_recv_buf to 0 so it has a
> guaranteed value.
>
> On Jan 27, 2012, at 6:21 AM, Gabriele Fatigati wrote:
>
> Hi Jeff,
>
> yes, a very stupid bug in the code, but even with the correction the problem
> with Valgrind in strcmp rema
and then alerting you later when you access
> those secondary uninitialized bytes.
>
> If I'm right, you can memset the local_hostname buffer (or use calloc),
> and then valgrind warnings will go away.
>
>
>
> On Jan 27, 2012, at 8:21 AM, Gabriele Fatigati wrote:
>
28, 2012, at 5:22 AM, Gabriele Fatigati wrote:
>
> > I had the same idea so my simple code I have already done calloc and
> memset ..
> >
> > The same warning still appears using strncmp, which should exclude
> uninitialized bytes on hostname_recv_buf :(
>
> Bummer.
>
Ok Jeff, thanks very much for your support!
Regards,
2012/1/31 Jeff Squyres
> On Jan 31, 2012, at 3:59 AM, Gabriele Fatigati wrote:
>
> > I have very interesting news. I recompiled OpenMPI 1.4.4 enabling the
> memchecker.
> >
> > Now the warning on strcmp is disap
e finished successfully.
Different values of eager limit don't solve the problem. Thanks in advance.
--
Gabriele Fatigati
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it
ocessor or less, have finished successfully.
Different values of eager limit don't solve the problem. The version of OpenMPI
involved is 1.2.5. Has anyone seen this kind of problem over InfiniBand?
Thanks in advance.
--
Gabriele Fatigati
CINECA Systems &
oduce the problem.
>
> Thanks & Regards,
> --Brad
>
> Brad Benton
> IBM
>
>
> On Tue, May 6, 2008 at 10:35 AM, Gabriele FATIGATI
> wrote:
>
> > Hi,
> > i tried to run SkaMPI 5.0.4 benchmark on IBM-BladeCenterLS21 system with
> > 256 proce
answered about a
> similar situation:
>
> http://www.open-mpi.org/community/lists/users/2008/05/5657.php
>
> See if using the pml_ob1_use_early_completion flag works for you.
>
>
>
> On Apr 30, 2008, at 7:05 AM, Gabriele FATIGATI wrote:
>
> &
A btl: parameter "btl_openib_max_eager_rdma" (current value: "16")
MCA btl: parameter "btl_openib_eager_rdma_num" (current value: "16")
MCA btl: parameter "btl_openib_min_rdma_size" (current value: "1048576")
MCA btl: parameter "btl_openib_max_rdma_size"
allocates a buffer of 2097152 K initially, but it allocates a small
buffer and reallocates it every time with a larger size. Is that
possible? If not, what is the cause of such performance?
Another question: is the RDMA pipeline protocol for long messages enabled by
default in OpenMPI 1.2.6?
2008/6
l. But at runtime I always get the error above, many times,
and the program fails with "undefined status".
Is this an OpenMPI bug?
--
Gabriele Fatigati
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
ww
"q"(newval), "m"(*((volatile long*)addr)),
"a"(oldval)* //<<<<< HERE
: "memory");
return (int)ret;
}
in /opal/include/opal/sys/amd64/atomic.h, at line 89
The previous environment variable is GCC_BOUNDS_OPTS
Thanks in ad
/opal/class/opal_object.h: . All errors are generated by the same line of code:
util/sys_info.c, line 43
The final status of the MPI job is always "Undefined".
Another bug?
2008/6/12 Gabriele Fatigati :
> I found that the error starts in this line code:
>
> static opal_
<< HERE
: "memory");
return (int)ret;
}
2008/6/13 Gabriele Fatigati :
> Maybe I solved this bug by deleting the long cast.
> Now it compiles well, but at runtime there are other
> problems, like this:
>
> ../../../opal/class/
ur. I tried to increase various InfiniBand timeouts, like
btl_openib_ib_timeout, orte_abort_timeout and btl_openib_ib_min_rnr_timer,
without results.
Thanks in advance.
--
Gabriele Fatigati
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di
o I don't know why it would fail.
> There could be weird interactions between the OFED stack and xgcc...?
> (i.e., memory "appears" from the kernel via ibv_* function calls, etc.)
>
>
>
> On Jun 14, 2008, at 7:58 AM, Gabriele Fatigati wrote:
>
> Hi Open MPI developers,
Just a moment:
I didn't compile OpenMPI with bounds checking, but only my application.
The problems with OMPI compiled with bounds checking remain.
2008/6/19 Jeff Squyres :
> On Jun 19, 2008, at 11:25 AM, Gabriele Fatigati wrote:
>
> Hi Jeff,
>> i solved using Gigabit ne
>does the gcc bounds checking stuff give you the possibility of saying "this
memory is ok"?
I think yes.
2008/6/19 Gabriele Fatigati :
> Just a moment:
>
> i didn't compile OpenMPI with bounds checking, but only my application.
> Problems with OMPI compil
Hi Jeff,
sorry for the delay. When I have a little time, I'll check the OMPI trunk with
bounds checking.
When is the delivery date of version 1.3?
2008/6/20 Jeff Squyres :
> On Jun 19, 2008, at 11:47 AM, Gabriele Fatigati wrote:
>
> i didn't compile OpenMPI with bounds ch
gt; ).
>But I could not solve it for C++ objects.
>
> Thank you,
>
> Carlos
>
>
>
-
>
>
--
Gabriele Fatigati
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it Tel: +39 051 6171722
g.fatig...@cineca.it
tool or other?
And when I restart my application, is it possible to modify the initial
number of processors?
--
Gabriele Fatigati
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it Tel: +39 051 6171722
g.f
node, not for all. I can have two
or more processes on different nodes that write concurrently to the file. Is
this dangerous or not? Does it depend on the file system? I'm using MPI-1 under
OpenMPI.
Thanks.
--
Gabriele Fatigati
CINECA Systems & Tecnologies Department
Supercomput
bles using MPI I/O with Open MPI.
>
> Have fun,
>george.
>
>
> On Jul 23, 2008, at 11:51 AM, Gabriele Fatigati wrote:
>
>
>> Hi,
>> i have a question about parallel i/o. In my application, actually i have
>> implemented a file lock with C system c
is ROMIO? Where can I find any information?
Thanks a lot!
2008/7/23 Jeff Squyres :
> On Jul 23, 2008, at 6:35 AM, Gabriele Fatigati wrote:
>
> >There is a whole chapter in the MPI standard about file I/O operations.
>> I'm quite confident you will find whatever you're
>
--
Gabriele Fatigati
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it Tel: +39 051 6171722
g.fatig...@cineca.it
nable-mpi-threads
--enable-ft-thread
--with-ft=cr
-with-blcr=/prod/tools/blcr/0.7.1/gnu--4.1.2
(and others, but less important).
Where is the problem? Is this version very unstable?
--
Gabriele Fatigati
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casal
at being said, the v1.4 nightly is our normal development head, so all
> the normal rules and disclaimers apply (it's *generally* stable, but
> sometimes things break).
>
>
>
> On Jul 31, 2008, at 10:27 AM, Gabriele Fatigati wrote:
>
>
>> Dear OpenMPI users,
>> i have installed OpenMPI 1.4 nightly over IBM BL
I'm using 9005. I'll try the latest version. Thanks.
2008/7/31 Lenny Verkhovsky
> try to use only openib
>
> make sure you use nightly after r19092
>
> On 7/31/08, Gabriele Fatigati wrote:
>>
>> Mm, i've tried to disable shared memory but the problem rema
nt working directory
[node0316:20134] [[42404,0],0] ORTE_ERROR_LOG: Not found in file
orte-checkpoint.c at line 395
[node0316:20134] HNP with PID 20109 Not found!
I don't understand why OpenMPI doesn't find that log file.
Any idea?
Thanks in advance.
--
Gabriele Fatigati
CINECA S
t; > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> >
>
> --
> Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
> tmat...@gmail.com || timat...@open-mpi.org
> I'm a bright... http://www.the-brights.net/
> _
don't use MPI_Barrier. So this error is caused by the internal
MPI_Barrier inside MPI_Finalize. I've noted that if I disable the MPI-2 I/O
routines, the application works well. Is there a strange effect of MPI_Finalize
under MPI-2 related to the MPI_File_open, MPI_File_close, MPI_File_read_at routines?
--
Ga
Yes,
problem solved. There was an open file. Thanks!
2008/9/20 Tim Mattox
> This sounds like you have left a file open when using the MPI-2 I/O.
> You need to MPI_File_close() any files you have opened.
>
> On Fri, Sep 19, 2008 at 6:10 PM, Gabriele Fatigati
> wrote:
> > H
the
only thread level supported is MPI_THREAD_SINGLE.
Which is the newest OpenMPI version that has full support for
MPI_THREAD_SINGLE,
MPI_THREAD_FUNNELED, MPI_THREAD_SERIALIZED and MPI_THREAD_MULTIPLE?
Thanks in advance.
--
Ing. Gabriele Fatigati
CINECA Systems & Tecnolog
type of code? What
does it mean that OpenMPI doesn't support all thread levels of
MPI_INIT_THREAD?
2008/9/29 Gabriele Fatigati
> Dear OpenMPi developers,
> I've noted that OpenMPI versions 1.2.5 and 1.2.6 don't support the thread
> initialization levels shown below:
>
ive you more
> info) to your configure. If you plan to use threads with Open MPI I strongly
> suggest to update to the 1.3. This version is not yet released, but you can
> download the source from the nightly build section.
>
> george.
>
>
>
> On Sep 29, 2008, at 9:58 AM
? because i am getting
> errors regarding STL_map.
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
--
Ing. Gabriele Fatigati
CINECA Systems & Tecnologies Department
d by LSF and OpenMPI. I
have launched 255 procs and there are 161 tasks... very, very strange.
Any idea?
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.i
.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it    Tel: +39 051 6171722
g.fatigati [AT] cineca.it
.. But I'm interested in just the calling
rank. Is it possible?
Thanks in advance.
I'm using openmpi 1.2.2
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it
can upgrade my OpenMPI if necessary.
Thanks.
2010/2/24 Nadia Derbey
> On Wed, 2010-02-24 at 09:55 +0100, Gabriele Fatigati wrote:
> >
> > Dear Openmpi users and developers,
> >
> > i have a question about MPI_Abort error message. I have a program
> > written in C+
Yes, of course,
but I would like to know if there is any way to do that with OpenMPI
2010/2/24 jody
> Hi Gabriele
> you could always pipe your output through grep
>
> my_app | grep "MPI_ABORT was invoked"
>
> jody
>
> On Wed, Feb 24, 2010 at 11:28 AM, Gabrie
u could try adding the --quiet option to your mpirun cmd line. This will
> help eliminate some (maybe not all) of the verbiage.
>
>
> On Feb 24, 2010, at 6:36 AM, Jed Brown wrote:
>
> > On Wed, 24 Feb 2010 14:21:02 +0100, Gabriele Fatigati <
> g.fatig...@cineca.it> wrote
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it    Tel: +39 051 6171722
g.fatigati [AT] cineca.it
" or "
> 192.168.0.0/16,10.1.4.0/24"). Mutually exclusive with btl_tcp_if_include.
> mca:btl:tcp:param:btl_tcp_if_exclude:deprecated:no
> $
>
>
> Did your TCP BTL plugin somehow not get built / installed?
>
>
> On Apr 13, 2010, at 6:06 AM, Gabriele Fatigati wrote:
>
> > Dear OpenMPI users and
o file (and probably a .la file
> as well). If the .so is not there, then the BTL TCP plugin is not installed
> (which would be darn weird, to be honest...).
>
>
> On Apr 13, 2010, at 8:23 AM, Gabriele Fatigati wrote:
>
> > Hi Jeff,
> >
> > thaks for your reply
e OMPI
> plugins got slurped up into their respective libraries (e.g., libmpi.a).
>
> If you run ompi_info --param btl tcp, do you see anything at all? If not,
> that would indicate that the TCP BTL wasn't built. IF so, can you send your
> build logs/etc.? (please compress!)
&
s/openmpi/1.3.3/intel--11.1--binary/etc/openmpi-mca-params.conf])
My actual configuration is:
btl = ^tcp
btl_tcp_if_exclude = eth0,ib0,ib1
oob_tcp_include = eth1,lo
But is it right? I have some doubts...
2010/4/13 Jeff Squyres
> On Apr 13, 2010, at 9:03 AM, Gabriele Fatigati wrote:
>
> &
Ok Jeff,
i have understood. Thanks very much for your help!
Regards.
2010/4/13 Jeff Squyres
> On Apr 13, 2010, at 9:17 AM, Gabriele Fatigati wrote:
>
> > My actual configuration is:
> >
> > btl = ^tcp
> > btl_tcp_if_exclude = eth0,ib0,ib1
> > oob_tcp_inclu
'
> waiting. Is there a way to do this?
>
> Regards,
> Gijsbert
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Sy
y.
2010/5/11 Gijsbert Wiesenekker
>
> On May 11, 2010, at 9:29 , Gabriele Fatigati wrote:
>
> Dear Gijsbert,
>
>
> >Ideally I would like to check how many MPI_Isend messages have not been
> processed yet, so that I can stop >sending messages if there are 't
03ib0 exited
on signal 11 (Segmentation fault).
--
The same happens using other Bcast algorithms. Disabling dynamic rules, it works
well. Maybe I'm using some wrong parameter setup?
Thanks in advance.
--
Ing. Gabriele Fatigati
;"
ofa-v2-ipath0-2 u2.0 nonthreadsafe default libdaploscm.so.2 dapl.2.0 "ipath0
2" ""
ofa-v2-ehca0-1 u2.0 nonthreadsafe default libdaploscm.so.2 dapl.2.0 "ehca0
1" ""
ofa-v2-iwarp u2.0 nonthreadsafe default libdaplofa.so.2 dapl.2.0 "eth2 0" ""
It works only if I use the 1.2 interface, not with the 2.0 version.
Thanks in advance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it    Tel: +39 051 6171722
g.fatigati [AT] cineca.it
ynamic_decision = 1; \
> EXECUTE; \
>
>
>
>
>
> On Jul 4, 2010, at 8:12 AM, Gabriele Fatigati wrote:
>
> > Dear OpenMPI user,
> >
> > i'm trying to use collective dynamic rules with OpenM
dvance.
--
Ing. Gabriele Fatigati
Parallel programmer
CINECA Systems & Tecnologies Department
Supercomputing Group
Via Magnanelli 6/3, Casalecchio di Reno (BO) Italy
www.cineca.it    Tel: +39 051 6171722
g.fatigati [AT] cineca.it