Neil,
Open MPI supports all thread models defined by the MPI standard if they are
available on the target system. A few years ago I did some work with MPI and
OpenMP, and the locking mechanism of existing MPI implementations was the
performance killer. We have worked hard to remove this bottleneck on
O
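As a quick illustration (a generic C sketch, not taken from any specific application), this is how a code asks for full thread support and checks what it actually got:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char *argv[]) {
      int provided;
      /* Request the highest thread level; the library may grant a lower one. */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
      if (provided < MPI_THREAD_MULTIPLE)
          fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n", provided);
      MPI_Finalize();
      return 0;
  }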
On May 5, 2005, at 7:58 AM, atarpley wrote:
1) When will the final Open MPI be released (non development)?
As soon as everybody is happy with the stability and the features of
a version. And, as for most HPC software, SC05 seems like a
reasonable deadline. Meanwhile, a beta vers
Joel,
I took a look at your code and found the error. Basically, it's just
a datatype problem. The datatype as described in your program does
not correspond to the one you expect to see in practice. Actually,
you forgot to set the correct extent.
Let me show you the problem. Let's suppose
Read the section about datatypes in order to figure out how to describe
the memory layout that you want to move between processors.
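For illustration only (a generic sketch, not Joel's actual code), setting the extent explicitly usually looks like this in C:

  #include <mpi.h>
  #include <stddef.h>

  struct elem { int id; double val; };   /* made-up struct for the example */

  void build_type(MPI_Datatype *newtype) {
      int          blocklens[2] = { 1, 1 };
      MPI_Aint     disps[2] = { offsetof(struct elem, id), offsetof(struct elem, val) };
      MPI_Datatype types[2] = { MPI_INT, MPI_DOUBLE };
      MPI_Datatype tmp;
      MPI_Type_create_struct(2, blocklens, disps, types, &tmp);
      /* Force the extent to sizeof(struct elem) so that an array of these
         structs is traversed with the right stride between elements. */
      MPI_Type_create_resized(tmp, 0, sizeof(struct elem), newtype);
      MPI_Type_commit(newtype);
      MPI_Type_free(&tmp);
  }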
george.
On Mon, 22 Aug 2005, Raul Mosquera wrote:
Hi,
I just started working with MPI.
I've been reading some documentation about MPI but I
have not found
First of all, with your approach you're not sending the whole structure.
You will just send the num_Rocs and the elements, missing the
num_Cols member of the structure. The problem comes from the fact
that lena[0] is set to 1, not 2.
Another problem (which I don't think can happen with this
configure is complaining about the missing atomic directives for your
processor. We have the MIPS atomic calls but not the MIPS64 ones. We just have
to add them in opal/asm/base. I don't have access to a MIPS64 node.
Maybe Brian?
Thanks,
george.
On Fri, 9 Sep 2005, Jonathan Day wrote:
Ken,
Please apply the following patch (from your /home/mighell/pkg/ompi/
openmpi-1.0rc4/ base directory).
Index: opal/runtime/opal_init.c
===
--- opal/runtime/opal_init.c    (revision 7831)
+++ opal/runtime/opal_init.c    (working
Mike,
If your nodes have more than one network interface, it can happen
that we do not select the right one. There is a simple way to ensure
that this does not happen. Create a directory named .openmpi in your
home area. In this directory edit the file mca-params.conf. This file
is loade
We did discover this problem on Mac OS X yesterday. There was a patch
on the trunk but I don't think it got into the stable branch. It will
be in the tarball by tomorrow.
Sorry about that,
george.
On Nov 8, 2005, at 1:17 PM, Charles Williams wrote:
Hi Brian,
Thanks for working on th
Allan,
If there are 2 Ethernet cards, it's better if you point to the one you
want to use. For that you can modify the .openmpi/mca-params.conf file in
your home directory. All of the options can go in this file, so you will
not have to specify them on the mpirun command line every time.
I give
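As an example only (the interface name is a placeholder, pick the one from your ifconfig output), the file $HOME/.openmpi/mca-params.conf could contain:

  # use only this interface for TCP messaging
  btl_tcp_if_include = eth0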
The MX library calls exit if the error handler is not set before
initialization. This error has been fixed; the fix will get into the tarball
shortly.
Meanwhile, you can use btl_base_exclude=mx,gm in order to force these
components to be skipped.
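For example (the process count and executable name are placeholders):

  mpirun -np 4 --mca btl_base_exclude mx,gm ./my_app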
Thanks,
george.
On Thu, 17 Nov 2005, Troy Telford wrot
Pierre,
The problem seems to come from the fact that we do not detect how to
generate the assembly code for our atomic operations. As a result, we fall
back on the gcc mode for 32-bit architectures.
Here is the corresponding output from the configure script:
checking if cc supports GCC inline as
Carsten,
In the Open MPI source code directory there is a collective component
called tuned (ompi/mca/coll/tuned). This component is not enabled by
default right now, but usually it gives better performance than the
basic one. You should give it a try (go inside and remove
the .ompi_ignor
On Dec 20, 2005, at 3:19 AM, Carsten Kutzner wrote:
I don't see how you deduce that adding barriers increases the
congestion? It increases the latency of the all-to-all, but for me
When I do an all-to-all a lot of times, I see that the time for a
single
all-to-all varies a lot. My time meas
Please remove lines 109 to 111 from the ompi/mca/btl/mx/
btl_mx_component.c file. They got in by accident; they belong to another
patch from version 1.0.2 of Open MPI and were not supposed
to be back-ported to 1.0.1.
Sorry about the inconvenience,
george.
On Jan 27, 200
Glen,
Thanks for spending the time benchmarking Open MPI and for sending us the
feedback. We know we have some issues in the 1.0.2 version, more precisely
with the collective communications. We just looked inside the CMAQ code, and
there are a lot of Reduce and Allreduce calls. As it looks like the collecti
On Feb 8, 2006, at 7:06 PM, Jean-Christophe Hugly wrote:
But should I understand from all this that the "direct" mode will
never
actually work ? It seems that if you need at least two transports,
then
none of them can be the hardwired unique one, right ? Unless there's a
built-in switch bet
There are 2 things that have to be done in order to be able to run an
Open MPI application. First, the runtime environment needs access to
some of the files in the bin directory, so you have to add the Open
MPI bin directory to your path. Second, as we use shared
libraries, the OS needs to kn
Yvan,
I'm looking into this one. So far I cannot reproduce it with the
current version from the trunk. I will look into the stable versions.
Until I figure out what's wrong, can you please use the nightly
builds to run your test? Once the problem gets fixed it will be
included in the 1.0.2
Konstantin,
The all-to-all scheduling works only because we know all processes will send
the same amount of data, so the communications will take "nearly" the
same time. Therefore, we can predict how to schedule the
communications to get the best out of the network. But this approach
can lead to
est stable.
Thanks,
george.
On Fri, 10 Feb 2006, George Bosilca wrote:
Yvan,
I'm looking into this one. So far I cannot reproduce it with the
current version from the trunk. I will look into the stable versions.
Until I figure out what's wrong, can you please use the nightly
builds to run
James,
I'm not 100% sure, but I think I might know what's wrong. I can reproduce
something similar (oddly, it does not happen all the time) if I activate
my firewall and let all the traffic through (i.e., accept all connections).
In a few words, I think the firewall (even when disabled) introduces some
Tennessee
Program co-chairs:
Yutaka Ishikawa, The University of Tokyo
Atsushi Hori, Riken AICS
Workshop chair:
Yuichi Tsujita, Riken AICS
Program Committee:
Ahmad Afsahi, Queen's University
Pavan Balaji, Argonne National Laboratory
Siegfried Benkner, University of Vienna
Gil Bloch, Mellanox Tech
Ross,
I'm not familiar with the R implementation you are using, but bear with me and
I will explain how you can ask Open MPI about the list of all pending requests
on a process. Disclosure: this is Open MPI deep voodoo, an extreme way to debug
applications that might save you quite some time.
Muhammad,
Our configuration of TCP is tailored for 1Gbs networks, so its performance on
10G might be sub-optimal. That being said, the remainder of this email is
speculation, as I do not have access to a 10G system to test it.
There are two things that I would test to see if I can improve
helped in achieving expected network bandwidth. Varying send and recv buffer
> sizes from 128 KB to 1 MB added just 50 Mbps with maximum bandwidth achieved
> on 1 MB buffer size.
> Thanks for support.
>
>
> On Thu, Apr 17, 2014 at 6:05 AM, George Bosilca wrote:
> Muhammad,
de de Engenharia do Porto, Portugal
Olivier Beaumont, INRIA, France
Paolo Bientinesi, RWTH Aachen, Germany
Cristina Boeres, Universidade Federal Fluminense, Brasil
George Bosilca, University of Tennessee, USA
Louis-Claude Canon, Université de Franche-Comté, France
Alexandre Denis, Inria, France
Spenser,
There are several issues with the code you provided.
1. You are using a 1D process grid to create a 2D block cyclic distribution.
That’s just not possible.
2. You forgot to take into account the extent of the datatype. By default the
extent of a vector type starts from the first by
Spenser,
Do you mind posting your working example here on the mailing list?
This might help future users understand how to correctly use the
MPI datatype.
Thanks,
George.
On Wed, May 7, 2014 at 3:16 PM, Spenser Gilliland
wrote:
> George,
>
> Thanks for taking the time to respond to my qu
I think the issue is with the way you define the send and receive
buffers in the MPI_Alltoall. You have to keep in mind that the
all-to-all pattern will overwrite the entire data in the receive
buffer. Thus, starting from a relative displacement in the data (in
this case matrix[wrank*wrows]), begs f
The segfault indicates that you write outside of the allocated memory (which
conflicts with the ptmalloc library). I'm quite certain that you write outside
the allocated array …
George.
On May 8, 2014, at 15:16 , Spenser Gilliland wrote:
> George & Mattheiu,
>
>> The Alltoall should only
Spenser,
Here is basically what is happening. On the top left, I depicted the datatype
resulting from the vector type. The two arrows point to the lower bound and
upper bound (thus the extent) of the datatype. On the top right, the resized
datatype, where the ub is now moved 2 elements after th
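To make the picture concrete, here is a generic C sketch (the counts and the 2-element extent are illustrative, not Spenser's actual values):

  MPI_Datatype vec, col;
  /* A strided pattern: 3 blocks of 1 double separated by a stride of 4. */
  MPI_Type_vector(3, 1, 4, MPI_DOUBLE, &vec);
  /* Resize so the upper bound sits 2 doubles after the lower bound; with a
     count > 1, consecutive instances of the type now start 2 elements apart
     instead of at the natural upper bound of the vector. */
  MPI_Type_create_resized(vec, 0, 2 * sizeof(double), &col);
  MPI_Type_commit(&col);
  MPI_Type_free(&vec);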
I read the MPICH trac ticket you pointed to and your analysis seems pertinent.
The impact of my patch for “count = 0” has a similar outcome to yours: it
removed all references to the datatype if the count was zero, without looking
for the special markers.
Let me try to come up with a fix.
Thanks,
The alltoall exchanges data from all nodes to all nodes, including the
local participant. So every participant will write the same amount of
data.
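A small self-contained sketch (sizes are made up) showing that every rank's receive buffer must hold the contributions from all ranks, and is entirely overwritten:

  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char *argv[]) {
      int rank, size, chunk = 4;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);
      /* Each rank sends `chunk` doubles to every rank and receives `chunk`
         doubles from every rank, so both buffers hold size * chunk elements. */
      double *sendbuf = malloc(size * chunk * sizeof(double));
      double *recvbuf = malloc(size * chunk * sizeof(double));
      for (int i = 0; i < size * chunk; i++) sendbuf[i] = rank;
      MPI_Alltoall(sendbuf, chunk, MPI_DOUBLE, recvbuf, chunk, MPI_DOUBLE,
                   MPI_COMM_WORLD);
      free(sendbuf); free(recvbuf);
      MPI_Finalize();
      return 0;
  }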
George.
On Thu, May 8, 2014 at 6:16 PM, Spenser Gilliland
wrote:
> George,
>
>> Here is basically what is happening. On the top left, I depicted t
This is more subtle than described here. It's a vectorization problem
and frankly it should appear on all loop-based string operations and
for most compilers (confirmed with gcc, clang and icc). The proposed
patch is merely a band-aid ...
More info @ https://bugs.launchpad.net/ubuntu/+source/valgr
Alan,
I think we forgot to clean up after a merge and as a result we have
c_destweights and c_sourceweights defined twice. Please try the
following patch and let us know if this fixes your issue.
Index: ompi/mpi/fortran/mpif-h/dist_graph_create_adjacent_f.c
e this is a more pervasive problem.
>
> Let us look at this a bit more...
>
>
> On Jun 5, 2014, at 10:37 AM, George Bosilca wrote:
>
>> Alan,
>>
>> I think we forgot to cleanup after a merge and as a result we have
>> c_destweights and c_sourceweights defi
The One-Sided Communications from the Chapter 11 of the MPI standard?
For processes on the same node you might want to look at
MPI_WIN_ALLOCATE_SHARED.
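For reference, a minimal sketch of a node-local shared window (the segment size is arbitrary):

  #include <mpi.h>

  int main(int argc, char *argv[]) {
      MPI_Comm node_comm;
      MPI_Win  win;
      double  *base, *peer0;
      MPI_Aint seg_size;
      int      node_rank, disp_unit;
      MPI_Init(&argc, &argv);
      /* Communicator containing only the processes on the same node. */
      MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                          MPI_INFO_NULL, &node_comm);
      MPI_Comm_rank(node_comm, &node_rank);
      /* Each process contributes 1024 doubles to the shared segment. */
      MPI_Win_allocate_shared(1024 * sizeof(double), sizeof(double),
                              MPI_INFO_NULL, node_comm, &base, &win);
      /* Get a direct pointer to rank 0's part of the segment. */
      MPI_Win_shared_query(win, 0, &seg_size, &disp_unit, &peer0);
      if (node_rank == 0) peer0[0] = 42.0;
      MPI_Win_fence(0, win);
      MPI_Win_free(&win);
      MPI_Comm_free(&node_comm);
      MPI_Finalize();
      return 0;
  }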
George.
On Fri, Jun 27, 2014 at 9:53 AM, Brock Palen wrote:
> Is there a way to import/map memory from a process (data acquisition) such
> th
George.
On Fri, Jun 27, 2014 at 10:30 AM, Brock Palen wrote:
> But this is within the same MPI "universe" right?
>
> Brock Palen
> www.umich.edu/~brockp
> CAEN Advanced Computing
> XSEDE Campus Champion
> bro...@umich.edu
> (734)936-1985
>
>
>
> On Ju
EuroMPI/ASIA 2014 Call for participation
EuroMPI/ASIA 2014 in-cooperation status with ACM and SIGHPC in Kyoto,
Japan, 9th - 12th September, 2014.
The prime annual meeting for researchers, developers, and students in
message-passing parallel computing with MPI and related paradigms.
Deadline
Why are you using system() the second time? As you want to spawn an MPI
application, calling MPI_Comm_spawn would make everything simpler.
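The original code is Fortran, but as a rough illustration the C version is essentially a one-liner (executable name and process count are placeholders):

  MPI_Comm child;
  /* Launch 4 copies of ./worker and get an intercommunicator back. */
  MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                 0, MPI_COMM_WORLD, &child, MPI_ERRCODES_IGNORE);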
George
On Jul 3, 2014 4:34 PM, "Milan Hodoscek" wrote:
>
> Hi,
>
> I am trying to run the following setup in fortran without much
> success:
>
> I have an MP
I have an extremely vague recollection of a similar issue in the
datatype engine: on the SPARC architecture, 64-bit integers must be
aligned on a 64-bit boundary or you get a bus error.
Takahiro, you can confirm this by printing the value of data when the signal
is raised.
George.
On Fri, Au
The patch related to ticket #4597 is zapping only the datatypes where the
user explicitly provided a zero count.
We can argue about LB and UB, but I have a hard time understanding the
rationale of allowing zero count only for LB and UB. If it is required by
the standard we can easily support it (t
On Mon, Aug 11, 2014 at 10:41 AM, Rob Latham wrote:
>
>
> On 08/11/2014 08:54 AM, George Bosilca wrote:
>
>> The patch related to ticket #4597 is zapping only the datatypes where
>> the user explicitly provided a zero count.
>>
>> We can argue abo
You have a physical constraint, the capacity of your links. If you are over 90%
of your network bandwidth, there is little to be improved.
George.
On Aug 27, 2014, at 0:18, "Zhang,Lei(Ecom)" wrote:
>> I'm not sure what you mean by this statement. If you add N asynchronous
>> requests and the
Based on the MPI standard (MPI 3.0 section 10.5.4 page 399) there is no
need to disconnect the child processes from the parent in order to cleanly
finalize. From this perspective, the original example is correct, but
sub-optimal as the parent processes calling MPI_Finalize might block until
all con
Look at your ifconfig output and select the Ethernet device (instead of the
IPoIB one). Traditionally the name lacks any fanciness; most distributions
use eth0 as a default.
George.
On Tue, Sep 9, 2014 at 11:24 PM, Muhammad Ansar Javed <
muhammad.an...@seecs.edu.pk> wrote:
> Hi,
>
> I am cur
889 errors:0 dropped:0 overruns:0 frame:0
> TX packets:66889 errors:0 dropped:0 overruns:0 carrier:0
> collisions:0 txqueuelen:0
> RX bytes:19005445 (18.1 MiB) TX bytes:19005445 (18.1 MiB)
>
>
>
>
>
>
>> Date: Wed, 10 Sep 2014 0
1. It is extremely unlikely to have a broken MPI communication pipe. Use a
parallel debugger to validate that your communication pattern is correct. I
would suspect a deadlock due to an incomplete communication pattern more
than a broken communication pipe.
2. Nope, you can't set timeouts on MPI calls. Th
Diego,
I strongly suggest a careful reading of the Datatype chapter in the MPI 3.0
standard. More precisely, the example 4.18 might be of particular interest in
your case, as it explains everything that you need to do in order to obtain a
portable datatype, one that works in all cases independe
Using MPI_ANY_SOURCE will extract one message from the queue of unexpected
messages. The fairness is not guaranteed by the MPI standard, thus it is
impossible to predict the order between servers.
If you need fairness your second choice is the way to go.
George.
> On Nov 10, 2014, at 20:14
Daniel,
Many papers have been published about the performance modeling of different
collective communications algorithms (and fortunately these models are
implementation independent). I can point you to our research in
collective modeling which is the underlying infrastructure behind the
decisi
Dave,
You’re right, we screwed up (some #define not correctly set). I have a patch,
I’ll push it asap.
George.
> On Nov 19, 2014, at 05:19 , Dave Love wrote:
>
> "Daniels, Marcus G" writes:
>
>> On Mon, 2014-11-17 at 17:31 +, Dave Love wrote:
>>> I discovered from looking at the mpiP
I would argue this is a typical user-level bug.
The major difference between dist_create and dist_create_adjacent is
that in the latter each process provides its neighbors in an order that is
expected (and that matches the info provided to the MPI_Neighbor_alltoallw
call). When the topology is cre
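As a generic illustration only (a 1-D ring, not the topology from this thread), the neighbor order given to MPI_Dist_graph_create_adjacent is the order the neighbor collectives will use:

  int left  = (rank - 1 + size) % size;
  int right = (rank + 1) % size;
  int sources[2]      = { left, right };   /* ranks that send to me */
  int destinations[2] = { left, right };   /* ranks I send to */
  MPI_Comm graph_comm;
  MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD, 2, sources, MPI_UNWEIGHTED,
                                 2, destinations, MPI_UNWEIGHTED,
                                 MPI_INFO_NULL, 0, &graph_comm);
  /* The buffers passed to MPI_Neighbor_alltoall(v/w) must follow the same
     order: slot 0 for `left`, slot 1 for `right`. */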
https://github.com/open-mpi/ompi/pull/285 is a potential answer. I would
like to hear Dave Goodell comment on this before pushing it upstream.
George.
On Wed, Nov 19, 2014 at 12:56 PM, George Bosilca
wrote:
> Dave,
>
> You’re right, we screwed up (some #define not correctly set).
> On Nov 25, 2014, at 01:12 , Gilles Gouaillardet
> wrote:
>
> Bottom line, though Open MPI implementation of MPI_Dist_graph_create is not
> deterministic, it is compliant with the MPI standard.
> /* not to mention this is not the right place to argue what the standard
> could or should have b
The same functionality can be trivially achieved at the user level using
Adam's approach. If we provide a shortcut in Open MPI, we should emphasize
that this is an MPI extension, and offer other MPI implementations the
opportunity to provide compatible support.
Thus, I would name all new types MPIX_ instead of OMP
; >
> > int main (int argc, char *argv[]) {
> > int i;
> > double t = 0;
> > MPI_Init(&argc, &argv);
> > for (;;) {
> > double _t = MPI_Wtime();
> > if (_t < t) {
> > fprintf(stderr, "going back in time %l
> On Dec 2, 2014, at 00:37 , Jeff Squyres (jsquyres) wrote:
>
> On Nov 28, 2014, at 11:58 AM, George Bosilca wrote:
>
>> The same functionality can be trivially achieved at the user level using
>> Adam's approach. If we provide a shortcut in Open MPI, we should
On Tue, Dec 2, 2014 at 9:56 AM, Jeff Squyres (jsquyres)
wrote:
> I like the OMPI_ prefix because it clearly identifies the function as
> specific to Open MPI (i.e., you really should enclose it in #if
> defined(OPEN_MPI) / #endif).
>
That's not enough. They will have to check for the right versi
You have to call MPI_Comm_disconnect on both sides of the intercommunicator. On
the spawner processes you should call it on the intercommunicator, while on the
spawnees you should call it on the communicator returned by MPI_Comm_get_parent.
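Schematically (a generic sketch with placeholder names, not the code from this thread):

  /* Spawner side: */
  MPI_Comm intercomm;
  MPI_Comm_spawn("./child", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                 MPI_COMM_WORLD, &intercomm, MPI_ERRCODES_IGNORE);
  /* ... work ... */
  MPI_Comm_disconnect(&intercomm);

  /* Spawnee side: */
  MPI_Comm parent;
  MPI_Comm_get_parent(&parent);
  /* ... work ... */
  if (parent != MPI_COMM_NULL)
      MPI_Comm_disconnect(&parent);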
George.
> On Dec 12, 2014, at 20:43 , Alex A. Schmidt wrote:
>
> Gilles,
>
> MPI_co
d a submit parameter to make the command block until
>> the job completes. Or you can write your own wrapper.
>> Or you can retrieve the jobid and qstat periodically to get the job state.
>> If an api is available, this is also an option.
>>
>> Cheers,
>>
>> G
or ?
>>
>> I also read the man page again, and MPI_Comm_disconnect does not ensure
>> the remote processes have finished or called MPI_Comm_disconnect, so that
>> might not be the thing you need.
>> George, can you please comment on that ?
>>
>> Cheers,
t;>
>>>>> On Mon, Dec 15, 2014 at 9:27 AM, Alex A. Schmidt wrote:
>>>>>
>>>>>> George,
>>>>>>
>>>>>> Thanks for the tip. In fact, calling mpi_comm_spawn right away with
>>>>>> MPI_COMM_
On Wed, Dec 17, 2014 at 7:29 PM, Jeff Squyres (jsquyres) wrote:
> Returning to a super-old thread that was never finished...
>
>
> On Dec 2, 2014, at 6:49 PM, George Bosilca wrote:
>
> > That's not enough. They will have to check for the right version of Open
> MPI
Ben,
I can't find anything in the MPI standard suggesting that a recursive
behavior of attribute deletion is enforced/supported by the
standard. Thus, the current behavior of Open MPI (a single lock for all
attributes), while maybe a little strict, is standard compliant (and thus
correct).
On Thu, Dec 18, 2014 at 2:27 PM, Jeff Squyres (jsquyres) wrote:
> On Dec 17, 2014, at 9:52 PM, George Bosilca wrote:
>
> >> I don't understand how MPIX_ is better.
> >>
> >> Given that there is *zero* commonality between any MPI extension
> implemented
interest/cycles in taking it over and advancing it with the Forum.
>
> Two additional points from the PDF listed above:
>
> - on slide 21, it was decided to no allow the recursive behavior (i.e.,
> you can ignore the "This is under debate" bullet.
> - the "destroy&q
Diego,
Non-blocking communications only indicate that a communication will happen;
they do not force it to happen. It will only complete in the corresponding
MPI_Wait, which also marks the moment starting from which the data can be
safely altered or accessed (in the case of MPI_Irecv). Thus
dea
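A minimal fragment of the idea (assuming src, tag and an initialized MPI program):

  MPI_Request req;
  double buf[100];
  /* The receive is only *posted* here ... */
  MPI_Irecv(buf, 100, MPI_DOUBLE, src, tag, MPI_COMM_WORLD, &req);
  /* ... do other work ... */
  /* ... and only after MPI_Wait returns is buf guaranteed to hold the data
     and be safe to read or modify. */
  MPI_Wait(&req, MPI_STATUS_IGNORE);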
Diego,
Please find below the corrected example. There were several issues, but the
most important one, which is certainly the cause of the segfault, is that
"real(dp)" (with dp = selected_real_kind(p=16)) is NOT equal to
MPI_DOUBLE_PRECISION. For double precision you should use 15 (and not 16).
G
s -- put them all
> in a single array).
>
> Look at examples 3.8 and 3.9 in the MPI-3.0 document.
>
>
>
> On Jan 8, 2015, at 5:15 PM, George Bosilca wrote:
>
> > Diego,
> >
> > Non-blocking communications only indicate a communication will happen,
> it
Or use MPI_Type_match_size to find the right type.
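For example, in C (the 8-byte size is illustrative; use whatever your Fortran kind really is):

  MPI_Datatype dtype;
  /* Ask MPI for the predefined REAL type whose size matches the kind in use. */
  MPI_Type_match_size(MPI_TYPECLASS_REAL, 8, &dtype);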
George.
> On Jan 8, 2015, at 19:05 , Gus Correa wrote:
>
> Hi Diego
>
> *EITHER*
> declare your QQ and PR (?) structure components as DOUBLE PRECISION
> *OR*
> keep them REAL(dp) but *fix* your "dp&q
I totally agree with Dave here. Moreover, based on the logic exposed by
Jeff, there is no right solution, because if one chooses to first wait on the
receive requests this also leads to a deadlock, as the send requests might
not be progressed.
As a side note, posting the receive requests first minim
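The safe ordering looks roughly like this (a generic sketch, assuming rbuf, sbuf, n, peer, tag and comm are defined):

  MPI_Request rreq;
  /* Both peers post the receive before the blocking send, so neither send
     can block forever waiting for a matching receive to show up. */
  MPI_Irecv(rbuf, n, MPI_DOUBLE, peer, tag, comm, &rreq);
  MPI_Send(sbuf, n, MPI_DOUBLE, peer, tag, comm);
  MPI_Wait(&rreq, MPI_STATUS_IGNORE);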
> On Jan 15, 2015, at 06:02 , Diego Avesani wrote:
>
> Dear Gus, Dear all,
> Thanks a lot.
> MPI_Type_Struct works well for the first part of my problem, so I am very
> happy to be able to use it.
>
> Regarding MPI_TYPE_VECTOR.
>
> I have studied it and for simple case it is clear to me what
orge, dear Gus, dear all,
>> Could you please tell me where I can find a good example?
>> I am sorry but I can not understand the 3D array.
>>
>>
>> Really Thanks
>>
>> Diego
>>
>>
>> On 15 January 2015 at 20:13, George Bosilca >
>
> I would like to have a sort *MPI_my_TYPE to do that (like *
> *MPI_my_STRUCT**) *and not to create every time *MPI_TYPE_VECTOR *because
> *DATASEND_REAL *changes size every time.
>
> I hope to make myself clear.
>
> So is it correct to use *MPI_TYPE_VECTOR?, *Can I do
emulate Fortran90 and C
> structures, as Gus' suggestion.
>
> Let's me look to that tutorial
> What do you think?
>
> Thanks again
>
>
>
>
>
>
> Diego
>
>
>> On 16 January 2015 at 16:02, George Bosilca <bosi...@icl.utk.edu> wro
ng in by then, writing a new book
might be worth the effort.
George.
>
> Thank you,
> Gus Correa
> (Hijacking Diego Avesani's thread, apologies to Diego.)
> (Also, I know this question is not about Open MPI, but about MPI in general.
> But the lack of examples may
t;
>> Maybe after the release of MPI 4.0 would be a good target …
>
> Not any sooner than that?
> MPI-2 is already poorly covered in the literature,
> MPI-3 only by the standard (yawn ...).
> And when MPI 4 comes, would we have to wait for MPI-5 to get
> the examples?
>
Using mpirun --mca btl_tcp_if_exclude eth0 should fix your problem. Otherwise
you can add the parameter to your configuration file. Everything is extensively
described in the FAQ.
George.
On Jan 26, 2015 12:11 PM, "Kris Kersten" wrote:
> I'm working on an ethernet cluster that uses virtual eth0:* interfaces
Sachin,
I can't replicate your issue with either the latest 1.8 or the trunk.
I tried using a single host, while forcing SM and then TCP, to no avail.
Can you try restricting the collective modules in use (adding --mca coll
tuned,basic to your mpirun command)?
George.
On Fri, Feb 20, 201
m = 12000
>> >rank 0, m = 12000
>> >rank 1, m = 13000
>> >rank 0, m = 13000
>> >rank 2, m = 13000
>> >rank 1, m = 14000
>> >rank 2, m = 14000
>> >rank 0, m = 14000
>> >rank 1, m = 15000
>> >
> On Feb 23, 2015, at 10:20 , Harald Servat wrote:
>
> Hello list,
>
> we have several questions regarding calls to collectives using
> intercommunicators. In man for MPI_Bcast, there is a notice for the
> inter-communicator case that reads the text below our questions.
>
> If an I is an i
Bogdan,
As far as I can tell your code is correct, and the problem is coming from
Open MPI. More specifically, I used alloca in the optimization stage in
MPI_Type_commit, and as your array lengths were too large, alloca failed
and led to a segfault. I fixed it in the trunk (3c489ea), and this wil
f Technical Sciences, Novi Sad, Serbia
>
> On Thu, Mar 5, 2015 at 6:31 PM, George Bosilca
> wrote:
>
>> Bogdan,
>>
>> As far as I can tell your code is correct, and the problem is coming from
>> Open MPI. More specifically, I used alloca in the optimization s
>
> Bogdan Sataric
>
> email: bogdan.sata...@gmail.com
> phone: +381 21-485-2441
>
> Teaching & Research Assistant
> Chair for Applied Computer Science
> Faculty of Technical Sciences, Novi Sad, Serbia
>
> On Fri, Mar 6, 2015 at 12:52 AM, George Bosilca
> wrote:
Khalid,
The decision is rechecked every time we create a new communicator. So, you
might create a solution that forces the algorithm to whatever you think is
best (using the environment variables you mentioned), then create a
communicator, and free it once you're done.
I have no idea what yo
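Roughly (the MCA parameter shown is only an example; use the variables from your own experiments):

  /* Launched with, e.g.:
       mpirun --mca coll_tuned_use_dynamic_rules 1 ... ./app
     then create a fresh communicator so the collective decision is redone: */
  MPI_Comm tuned_comm;
  MPI_Comm_dup(MPI_COMM_WORLD, &tuned_comm);
  /* ... run the collectives of interest on tuned_comm ... */
  MPI_Comm_free(&tuned_comm);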
Thomas,
What exactly is 'local_tlr_lookup(1)%wtlr'?
I think the problem is that your MPI derived datatype uses the pointer to
the allocatable arrays instead of using the pointer to the first element of
these arrays. As an example, instead of doing
call mpi_get_address(local_tlr_lookup(1)%wtlr,
s issue of derived data types,
> though. Perhaps that'll help.
>
> Best,
> Thomas
>
> On Sun, Mar 15, 2015 at 9:00 PM George Bosilca <bosi...@icl.utk.edu> wrote:
> Thomas,
>
> IWhat exactly is 'local_tlr_lookup(1)%wtlr'?
>
> I thi
16d9f71d01cc should provide a fix for this issue.
George.
On Sat, May 21, 2016 at 12:08 PM, Akihiro Tabuchi <
tabu...@hpcs.cs.tsukuba.ac.jp> wrote:
> Hi Gilles,
>
> Thanks for your quick response and patch.
>
> After applying the patch to 1.10.2, the test code and our program which
> uses nes
Apparently Solaris 10 lacks support for strnlen. We should add it to our
configure and provide a replacement where needed.
George.
On Wed, Jun 8, 2016 at 4:30 PM, Siegmar Gross <
siegmar.gr...@informatik.hs-fulda.de> wrote:
> Hi,
>
> I have built openmpi-dev-4221-gb707d13 on my machines (Solar
On Jul 8, 2016 3:16 PM, "Juan Francisco Martínez" <
juan.francisco.marti...@est.fib.upc.edu> wrote:
>
> Hi everybody!
>
> First of all I want to congratulate all of you because the quality of
> the community, I have solved a lot of doubts just reading the mailing
> list.
>
> However I have a questi
Not that I know of. ompi_info should be good enough, no?
George
>
> - Fran
>
> On Fri, 2016-07-08 at 15:40 +0200, George Bosilca wrote:
> >
> > On Jul 8, 2016 3:16 PM, "Juan Francisco Martínez" <
> > juan.francisco.marti...@est.fib.upc.edu> wrote:
> &
Steve,
Some of these interferences are due to the design choices in Open MPI, but
others are due to the constraints imposed by MPI. As an example, MPI
requires FIFO ordering on message delivery on each communication channel
(communicator/peer). If you inject messages from multiple threads for the
This function is not available on OS X, and the corresponding OMPI module
shouldn't have been compiled. How did you configure your OMPI install?
George.
On Mon, Oct 3, 2016 at 10:09 AM, Christophe Peyret <
christophe.pey...@onera.fr> wrote:
> Hello,
>
> since Xcode8 update, I have problem pro
nfirms that.
>
> ompi-release git:(v1.10) ✗ find . -name '*.[ch]' | xargs grep
> clock_gettime
>
> ompi-release git:(v1.10) ✗
>
> -Nathan
>
> On Oct 03, 2016, at 10:50 AM, George Bosilca wrote:
>
> This function is not available on OS X, and the corresponding O
George,
There is too much information missing from your example. If I try to run
the code at the top, assuming the process is is_host(NC.node), I have
3 communications on NC.commd (ignoring the others):
rc = MPI_Send(&ival, 1, MPI_INT, NC.dmsgid, SHUTDOWN_ANDMSG, NC.commd);
MPI_Recv(&ival, 1, MPI_IN
Rick,
Let's assume that you have started 2 processes, and that your sensorList is
{1}. The worldgroup will then be {P0, P1}, which trimmed via the sensorList
will give the sensorgroup {MPI_GROUP_EMPTY} on P0 and the sensorgroup {P1}
on P1. As a result, on P0 you will create an MPI_COMM_NULL communic
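In generic C form (a sketch with an illustrative rank list, not Rick's code):

  MPI_Group worldgroup, sensorgroup;
  MPI_Comm  sensorcomm;
  int sensorList[1] = { 1 };
  MPI_Comm_group(MPI_COMM_WORLD, &worldgroup);
  MPI_Group_incl(worldgroup, 1, sensorList, &sensorgroup);
  /* Every rank of MPI_COMM_WORLD calls this; ranks outside the group
     (here P0) get MPI_COMM_NULL back. */
  MPI_Comm_create(MPI_COMM_WORLD, sensorgroup, &sensorcomm);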
Vahid,
You cannot use Fortran's vector subscripts with MPI. Are you certain that
the arrays used in your bcast are contiguous? If not, you would either need
to first move the data into a single-dimension array (which will then have
the elements contiguous in memory), or define specialized datatyp