Dear all,
I am new to the MPI world.
I would like to know the best choice among the different functions, and what
each one means.
In my program I would like each process to send a vector of data to all the
other processes. What do you suggest?
Is MPI_Bcast the correct choice, or am I missing something?
Thanks a
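For the pattern described above (every rank ends up with every other rank's
vector), MPI_Allgather is the usual answer; MPI_Bcast only distributes from a
single root. A minimal sketch, with the vector length N and the fill values
chosen only for illustration:

#include <mpi.h>
#include <stdlib.h>

#define N 4  /* per-rank vector length, illustrative */

int main(int argc, char **argv)
{
    int rank, size, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double sendbuf[N];
    for (i = 0; i < N; i++)
        sendbuf[i] = rank + 0.1 * i;          /* arbitrary per-rank data */

    /* recvbuf holds one N-element block per rank, ordered by rank */
    double *recvbuf = malloc((size_t)size * N * sizeof(double));
    MPI_Allgather(sendbuf, N, MPI_DOUBLE,
                  recvbuf, N, MPI_DOUBLE, MPI_COMM_WORLD);

    free(recvbuf);
    MPI_Finalize();
    return 0;
}

If every rank instead needs to send a *different* vector to each peer,
MPI_Alltoall is the matching collective.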
Hello,
The rationale is to read the message and do what it says :)
Have a look at
www.open-mpi.org/projects/hwloc/doc/v1.10.0/a00028.php#faq_os_error
Try upgrading your BIOS and kernel.
Otherwise install hwloc and send the output (tarball) of
hwloc-gather-topology to hwloc-users (not to OMPI
Dear all, when trying to run NWChem with Open MPI, I get this error.
* Hwloc has encountered what looks like an error from the operating system.
*
* object intersection without inclusion!
* Error occurred in topo
On Dec 19, 2014, at 10:44 AM, George Bosilca wrote:
> Regarding your second point, while I do tend to agree that such issue is
> better addressed in the MPI Forum, the last attempt to fix this was certainly
> not a resounding success.
Yeah, fair enough -- but it wasn't a failure, either. It c
On Fri, Dec 19, 2014 at 8:58 AM, Jeff Squyres (jsquyres) wrote:
> George:
>
> (I'm not a member of petsc-maint; I have no idea whether my mail will
> actually go through to that list)
>
> TL;DR: I do not think that George's change was correct. PETSc is relying
> on undefined behavior in the MPI s
On Dec 19, 2014, at 8:58 AM, Jeff Squyres (jsquyres) wrote:
> More specifically, George's change can lead to inconsistency/incorrectness in
> the presence of multiple threads simultaneously executing attribute actions
> on a single entity.
Actually -- it's worse than I first thought. This cha
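For readers joining here: "attribute actions" refers to MPI's keyval-based
attribute caching on communicators (and other handles). A minimal
single-threaded sketch of the calls involved, with an arbitrary cached value,
just to fix terminology:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int keyval;
    MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN,
                           MPI_COMM_NULL_DELETE_FN,
                           &keyval, NULL);

    static int payload = 42;              /* arbitrary cached value */
    MPI_Comm_set_attr(MPI_COMM_WORLD, keyval, &payload);

    void *val;
    int flag;
    MPI_Comm_get_attr(MPI_COMM_WORLD, keyval, &val, &flag);
    if (flag)
        printf("cached attribute: %d\n", *(int *)val);

    MPI_Comm_delete_attr(MPI_COMM_WORLD, keyval);
    MPI_Comm_free_keyval(&keyval);
    MPI_Finalize();
    return 0;
}

Under MPI_THREAD_MULTIPLE, nothing in the standard orders two threads racing
through set/get/delete on the same keyval and communicator, which is the
inconsistency being discussed.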
George:
(I'm not a member of petsc-maint; I have no idea whether my mail will actually
go through to that list)
TL;DR: I do not think that George's change was correct. PETSc is relying on
undefined behavior in the MPI standard and should probably update to use a
different scheme.
More detail:
On Dec 19, 2014, at 2:48 AM, George Bosilca wrote:
> We made little progress over the last couple of [extremely long] emails, and
> the original topic diverged and got diluted. Let's put our discussion on hold
> here and let Nick, Keita, and the others go ahead and complete their work. We
> can fiddl
I have been following this with great interest; I will create a PR for my
branch, then.
To be clear, I had already made the OMPI change before this discussion came up,
so this will be the one; however, changing to other naming schemes is easy.
2014-12-19 7:48 GMT+00:00 George Bosilca :
>
> On Thu, D
On Thu, Dec 18, 2014 at 2:27 PM, Jeff Squyres (jsquyres) wrote:
> On Dec 17, 2014, at 9:52 PM, George Bosilca wrote:
>
> >> I don't understand how MPIX_ is better.
> >>
> >> Given that there is *zero* commonality between the MPI extensions
> >> implemented by different MPI implementations, how exactly is
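To illustrate the objection with a hypothetical sketch (MPIX_Foo and
HAVE_MPIX_FOO are invented names, not any real API): the MPIX_ prefix alone
does not make code portable, because the feature test remains
per-implementation.

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
#ifdef HAVE_MPIX_FOO
    /* extension path: compiles only on the one implementation
       that happens to ship MPIX_Foo (an invented example) */
    MPIX_Foo(MPI_COMM_WORLD);
#else
    /* portable fallback: standard MPI calls only */
#endif
    MPI_Finalize();
    return 0;
}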