I pulled a fresh copy of the dev trunk and tried building. It did not
change anything; I am still getting the same error:
../../../ompi/.libs/libmpi.so: undefined reference to
`opal_atomic_swap_64'
The GNU version still builds fine.
On Tue, Mar 6, 2012 at 5:38 AM, Jeffrey Squyres wrote:
> I disable
Yes, I can, and I did use gfortran, and that fixed the problem. There was no
need to edit mpif-config.h. Moreover, the program of interest compiled quite
happily. Thank you very much indeed. I did not realize that g95 was an
outdated package.
I can only think that I once had a problem with gfortran and
I'm afraid that I have neither a Cray nor the PGI compiler, so I'm not in a
good position to help here.
Would someone with the PGI compiler give the trunk a whirl to see if disabling
the CXX inline assembly for PGI broke something? I'd be a little surprised,
since we already had it disabled fo
Hi all,
I am scratching my head over what I think should be a relatively simple group
communicator operation. I am hoping some kind person can put me out of my
misery by figuring out what I'm doing wrong.
Basically, I am trying to scatter a set of values to a subset of process ranks
(hence the
Isn't it because you're calling MPI_Scatter() even on rank 2, which is not
part of your new_comm?
Regards,
Nadia
users-boun...@open-mpi.org wrote on 03/06/2012 01:52:06 PM:
> From: Timothy Stitt
> To: "us...@open-mpi.org"
> Date: 03/06/2012 01:52 PM
> Subject: [OMPI users] Scatter+Group Communi
Hi Nadia,
Thanks for the reply. This is where my confusion with the scatter command comes
in. I was really hoping that MPI_Scatter would automagically ignore the ranks
that are not part of the group communicator, since this test code is part of
something more complicated where many sub-communicat
Tim,
Since MPI_Comm_create sets the created communicator to MPI_COMM_NULL for
the processes that are not in the group, maybe preceding your
collectives with a:
if (MPI_COMM_NULL != new_comm) {
    /* collective calls on new_comm go here */
}
could be enough.
But maybe I'm wrong; I'll let the specialists answer.
Regards,
Nadia
--
Na
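
For reference, here is a minimal, self-contained sketch of the pattern being
suggested. The two-rank subset, buffer contents, and variable names are
illustrative assumptions, not Tim's actual code (run with at least two
ranks):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_rank, sendbuf[2] = {10, 20}, recvval = -1;
    int ranks[2] = {0, 1};              /* illustrative subset */
    MPI_Group world_group, sub_group;
    MPI_Comm new_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Build a communicator containing only the chosen ranks. */
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_incl(world_group, 2, ranks, &sub_group);
    MPI_Comm_create(MPI_COMM_WORLD, sub_group, &new_comm);

    /* MPI_Comm_create returns MPI_COMM_NULL on ranks outside the
       group, so only members of new_comm enter the collective. */
    if (MPI_COMM_NULL != new_comm) {
        MPI_Scatter(sendbuf, 1, MPI_INT, &recvval, 1, MPI_INT,
                    0, new_comm);
        printf("world rank %d received %d\n", world_rank, recvval);
        MPI_Comm_free(&new_comm);
    }

    MPI_Group_free(&sub_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}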
Will definitely try that. Thanks for the suggestion.
Basically, I need to be able to scatter values from a sender to a subset of
ranks (as I scale my production code I don't want to use MPI_COMM_WORLD,
since the receiver list will be quite small) without the receivers knowing if they
are to recei
Could someone confirm whether this is a bug or a misunderstanding of the doc
(in which case it's not just me, and it needs clarifying!)? I haven't
looked at the current code in the hope of a quick authoritative answer.
This is with 1.5.5rc3, originally on Interlagos, but also checked on
Magny Cours.
On Tue, Mar 6, 2012 at 7:28 AM, Dave Love wrote:
> Could someone confirm whether this is a bug or a misunderstanding of the doc
> (in which case it's not just me, and it needs clarifying!)? I haven't
> looked at the current code in the hope of a quick authoritative answer.
>
> This is with 1.5.5rc3,
Hi Timothy
There is no call to MPI_Finalize in the program.
Would this be the problem?
I hope this helps,
Gus Correa
On Mar 6, 2012, at 10:19 AM, Timothy Stitt wrote:
> Will definitely try that. Thanks for the suggestion.
>
> Basically, I need to be able to scatter values from a sender to a s
Dear Gus,
That was a transcription error on my part when copying the code into the
email. The Finalize is in the actual code I used.
Thanks,
Tim.
On Mar 6, 2012, at 11:43 AM, Gustavo Correa wrote:
Hi Timothy
There is no call to MPI_Finalize in the program.
Would this be the problem?
I hope this helps,
Gus Correa
On
Ralph Castain writes:
> Well, no - it shouldn't do that, so I would expect it is a bug. Will try to
> look at it, but probably won't happen until next week due to travel.
OK, thanks. I'll raise an issue and take a look, as we need it working
soon.
I think the problem is that it is trying to inline opal_atomic_swap_64 (when it
shouldn't with PGI). I am working on an improved mpool/rcache (w/o kernel
assistance) solution at the moment, but when I am done I can take a look.
-Nathan
On Tue, 6 Mar 2012, Jeffrey Squyres wrote:
I'm afraid tha
Hi
I am working on a 3D ADI solver for the heat equation. I have implemented it
serially. Would anybody be able to indicate the best and most straightforward
way to parallelise it? Apologies if this is going to the wrong forum.
thanks
Sanjay
On Mar 6, 2012, at 3:59 PM, Kharche, Sanjay wrote:
>
> Hi
>
> I am working on a 3D ADI solver for the heat equation. I have implemented it
> serially. Would anybody be able to indicate the best and most
> straightforward way to parallelise it? Apologies if this is going to the
> wrong forum
Do you want to parallelize in distributed-memory fashion, or is
multi-threading good enough? Anyhow, you should be able to find many
resources with an Internet search. This particular mailing list is more for
users of OMPI, a particular MPI implementation. One approach would be to
distribute only one axis, s
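
For reference, a minimal sketch of the one-axis (slab) decomposition idea;
the global extent NZ and the variable names are illustrative assumptions:

#include <mpi.h>
#include <stdio.h>

#define NZ 128   /* illustrative global extent of the distributed axis */

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Split the z axis into contiguous slabs, spreading any
       remainder over the first NZ % size ranks. */
    int base     = NZ / size;
    int rem      = NZ % size;
    int nz_local = base + (rank < rem ? 1 : 0);
    int z_start  = rank * base + (rank < rem ? rank : rem);

    printf("rank %d owns planes [%d, %d)\n",
           rank, z_start, z_start + nz_local);

    /* Each sweep along the distributed axis would then need a halo
       exchange of one plane with rank-1 and rank+1 (not shown). */
    MPI_Finalize();
    return 0;
}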
Hi all,
Can anyone tell me whether it is legal to pass zero values for some of the
send count elements in an MPI_Alltoallv() call? I want to perform an
all-to-all operation, but for performance reasons I do not want to send data
to various ranks that don't need to receive any useful values. If it i
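
For reference, zero entries in the send-count array are permitted by the MPI
standard. Here is a minimal sketch in which each rank sends a single int only
to the next-higher rank, leaving every other send count at zero; the
communication pattern and names are illustrative:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, sendval, recvval = -1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    sendval = rank;

    /* calloc: all counts and displacements start at zero. */
    int *sendcounts = calloc(size, sizeof(int));
    int *recvcounts = calloc(size, sizeof(int));
    int *sdispls    = calloc(size, sizeof(int));
    int *rdispls    = calloc(size, sizeof(int));

    /* Send one int to the next-higher rank only; every other send
       count stays zero, which MPI_Alltoallv permits. */
    if (rank + 1 < size) sendcounts[rank + 1] = 1;
    if (rank > 0)        recvcounts[rank - 1] = 1;

    MPI_Alltoallv(&sendval, sendcounts, sdispls, MPI_INT,
                  &recvval, recvcounts, rdispls, MPI_INT,
                  MPI_COMM_WORLD);
    printf("rank %d received %d\n", rank, recvval);

    free(sendcounts); free(recvcounts);
    free(sdispls);    free(rdispls);
    MPI_Finalize();
    return 0;
}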
On 03/06/2012 03:59 PM, Kharche, Sanjay wrote:
Hi
I am working on a 3D ADI solver for the heat equation. I have implemented it
serially. Would anybody be able to indicate the best and most straightforward
way to parallelise it? Apologies if this is going to the wrong forum.
If it's to be i
On Tue, Mar 6, 2012 at 16:23, Tim Prince wrote:
> On 03/06/2012 03:59 PM, Kharche, Sanjay wrote:
>
>> Hi
>>
>> I am working on a 3D ADI solver for the heat equation. I have implemented
>> it serially. Would anybody be able to indicate the best and most
>> straightforward way to parallelise it?
On Tue, Mar 6, 2012 at 15:43, Timothy Stitt wrote:
> Can anyone tell me whether it is legal to pass zero values for some of the
> send count elements in an MPI_Alltoallv() call? I want to perform an
> all-to-all operation but for performance reasons do not want to send data
> to various ranks who
Thanks, Jed, for the advice. How well-implemented are the one-sided
communication routines? Are they something that could be trusted in
production code?
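
Since Jed's reply is truncated below, and assuming it pointed toward MPI
one-sided operations, here is a minimal fence-synchronized MPI_Put sketch
(run with at least two ranks; all names are illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, payload = 42, window_val = -1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Every rank exposes one int; rank 0 writes into rank 1's window. */
    MPI_Win_create(&window_val, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    if (rank == 0)
        MPI_Put(&payload, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);

    if (rank == 1)
        printf("rank 1 window now holds %d\n", window_val);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}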
Sent from my iPad
On Mar 6, 2012, at 6:06 PM, Jed Brown <j...@59a2.org> wrote:
On Tue, Mar 6, 2012 at 15:43, Timothy Stitt <t
Hello Ralph,
Thanks for your reply.
In order to start my job, I tried the following two ways:
(1) configured/compiled Open MPI and compiled the benchmark on the head node,
then submitted a PBS job.
(2) submitted an interactive job to redo the config/compile on a compute node,
and then used "/path/to/mpic
Wow - that's the ugliest configure line I think I've ever seen :-/
I note you have a --with-platform in the middle of it, which is really
unusual. Normally, you would put all that stuff in a platform file if
that's what you were going to do. Note that anything in the platform file
will override an