intel/orte'
make: *** [all-recursive] Error 1
Thanks,
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
[Attachment: config.log.tbz]
Subject: [OMPI users] configure fails to detect missing libcrypto
From: Ralph Castain (rhc_at_[hidden])
Date: 2014-07-24 17:12:16
g them?
>
> If we exclude GPU or other non-MPI solutions, and with cost being a primary
> factor, what is the progression path from 2 boxes to a cloud-based solution
> (Amazon and the like...)?
>
> Regards,
> MM
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
explained.
There's a nice paper on self-consistent performance of MPI implementations
that has lots of details.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
was assuming KMP_AFFINITY was used
>
>
> so let me put it this way:
>
> do *not* use KMP_AFFINITY with mpirun -bind-to none; otherwise, you will
> very likely end up doing time sharing ...
>
>
> Cheers,
>
>
> Gilles
>
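> as an illustration only (a sketch, not from my tests): one compatible
> setup is to let mpirun own the placement and have the OpenMP runtime
> bind within it, e.g.
>
> mpirun -np 4 --bind-to socket ./a.out
>
> with OMP_PROC_BIND=true in the environment, instead of KMP_AFFINITY
>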
> On 6/22/2016 5:07 PM, Jeff Hammond wrote:
same latest_snapshot.txt thing there:
>>
>> wget
>> https://www.open-mpi.org/software/ompi/v2.x/downloads/latest_snapshot.txt
>> wget https://www.open-mpi.org/software/ompi/v2.x/downloads/openmpi-`cat
>> latest_snapshot.txt`.tar.bz2
>>
>>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
MPI_MAX_INFO_KEY: 36
MPI_MAX_INFO_VAL: 256
MPI_MAX_PORT_NAME: 1024
MPI_MAX_DATAREP_STRING: 128
How do I extract configure arguments from an OpenMPI installation? I am
trying to reproduce a build exactly and I do not have access to config.log
from the origin build.
Thanks,
Jeff
--
Jeff Hammond
jeff.sc
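(For what it is worth, a sketch of one approach: ompi_info from the
installation in question records build information; I believe

ompi_info -c | grep -i configure

prints the original configure command line, but verify against your
version's documentation.)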
is invalid.
>
>
>
>
>
> Huh. I guess I'd assumed that the MPI Standard would have made sure a
> declared communicator that hasn't been filled would have been an error to
> use.
>
>
>
> When I get back on Monday, I'll try out some other compil
as the same option. I
never need stdin to run MiniDFT (i.e. QE-lite).
Since both codes you name already have the correct workaround for stdin, I
would not waste any time debugging this. Just do the right thing from now
on and enjoy having your applications wo
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
e mpi world, then only start the mpi framework once
it's needed?
>
> Regards,
>
--
Jeff Hammond
jeff.scie...
George:
http://mpi-forum.org/docs/mpi-3.1/mpi31-report/node422.htm
Jeff
On Sun, Oct 16, 2016 at 5:44 PM, George Bosilca wrote:
> Vahid,
>
> You cannot use Fortran's vector subscripts with MPI.
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeff
ion 15.0.2.164
> OPEN-MPI 2.0.1
>
>
> T. Rosmond
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
world written in Fortran that could
> benefit greatly from this MPI-3 capability. My own background is in
> numerical weather prediction, and I know it would be welcome in that
> community. Someone knowledgeable in both C and Fortran should be able to
> get to the bottom of it.
>
> T
On Mon, Nov 7, 2016 at 8:54 AM, Dave Love wrote:
>
> [Some time ago]
> Jeff Hammond writes:
>
> > If you want to keep long-waiting MPI processes from clogging your CPU
> > pipeline and heating up your machines, you can turn blocking MPI
> > collectives into nice
>
>
>
>1. MPI_ALLOC_MEM integration with memkind
>
It would make sense to prototype this as a standalone project that is
integrated with any MPI library via PMPI. It's probably a day or two of
work to get that going.
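As a minimal sketch of such an interposer (untested; the info key
"mem_kind" is invented for illustration, and a real version needs thread
safety and smarter bookkeeping):

#include <mpi.h>
#include <memkind.h>
#include <string.h>

/* Naive registry of memkind-owned allocations (not thread-safe). */
#define MAX_TRACKED 1024
static void *tracked[MAX_TRACKED];
static int ntracked = 0;

int MPI_Alloc_mem(MPI_Aint size, MPI_Info info, void *baseptr)
{
    char value[MPI_MAX_INFO_VAL+1];
    int flag = 0;
    if (info != MPI_INFO_NULL)
        MPI_Info_get(info, "mem_kind", MPI_MAX_INFO_VAL, value, &flag);
    if (flag && strcmp(value, "hbw") == 0 && ntracked < MAX_TRACKED) {
        void *p = memkind_malloc(MEMKIND_HBW, (size_t)size);
        if (p == NULL) return MPI_ERR_NO_MEM;
        tracked[ntracked++] = p;          /* remember who owns this */
        *(void **)baseptr = p;
        return MPI_SUCCESS;
    }
    return PMPI_Alloc_mem(size, info, baseptr);  /* default path */
}

int MPI_Free_mem(void *ptr)
{
    for (int i = 0; i < ntracked; i++) {
        if (tracked[i] == ptr) {          /* ours: release via memkind */
            memkind_free(MEMKIND_HBW, ptr);
            tracked[i] = tracked[--ntracked];
            return MPI_SUCCESS;
        }
    }
    return PMPI_Free_mem(ptr);
}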
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http:
Have you tried subcommunicators? MPI is well-suited to hierarchical
parallelism since MPI-1 days.
Additionally, MPI-3 enables MPI+MPI as George noted.
Your question is probably better suited for Stack Overflow, since it's not
implementation-specific...
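As a bare sketch of both approaches (my own illustration; the team size
of 4 is arbitrary):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int wrank;
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    /* Hierarchical parallelism with subcommunicators (MPI-1):
     * split COMM_WORLD into teams of 4 consecutive ranks. */
    MPI_Comm team;
    MPI_Comm_split(MPI_COMM_WORLD, wrank / 4, wrank, &team);

    /* MPI+MPI style (MPI-3): one communicator per shared-memory node,
     * suitable for MPI_Win_allocate_shared. */
    MPI_Comm node;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);

    int trank, nrank;
    MPI_Comm_rank(team, &trank);
    MPI_Comm_rank(node, &nrank);
    printf("world %d -> team %d, node %d\n", wrank, trank, nrank);

    MPI_Comm_free(&team);
    MPI_Comm_free(&node);
    MPI_Finalize();
    return 0;
}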
Jeff
On Fri, Nov 25, 2016 at 3:34 AM Diego
well (http://www.pgroup.com/userforum/viewtopic.php?t=5413&start=0) since
> I'm not sure. But, no matter what, does anyone have thoughts on how to
> solve this?
>
> Thanks,
> Matt
>
> --
> Matt Thompson
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
ents of the typical effects of spinning and
> ameliorations on some sort of "representative" system?
>
>
None that are published, unfortunately.
Best,
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
a60f50479
>>
>> The problem seems to have been one with the Xcode configuration:
>>
>> "It turns out my Xcode was messed up as I was missing /usr/include/.
>> After rerunning xcode-select --install it works now."
>>
>> On my OS X 10.11.6,
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
>>>>
>>>> Cheers
>>>> Joseph
d via IB is not a solution for
> >> multi-node jobs, huh).
> >
> > But it works OK with libfabric (ofi mtl). Is there a problem with
> > libfabric?
> >
> > Has anyone reported this issue to the cp2k people? I know it's not
> > their problem, but I
On Wed, Mar 15, 2017 at 5:44 PM Jeff Squyres (jsquyres)
wrote:
> On Mar 15, 2017, at 8:25 PM, Jeff Hammond wrote:
> >
> > I couldn't find the docs on mpool_hints, but shouldn't there be a way to
> disable registration via MPI_Info rather than patching the source
prior emails in this
> thread: "As always, experiment to find the best for your hardware and
> jobs." ;-)
>
>
Indeed, this is a problem. There is an effort to fix the API in MPI-4 (see
https://github.com/jeffhammond/bigmpi-paper) but as you know, there are
implementation defects that break correct MPI-3 programs that use datatypes
to work around the limits of C int. We were able to find a bunch of
proble
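To illustrate the datatype workaround in question (a sketch of mine, with
an arbitrary chunk size and no error handling; the receiver must mirror
the same two-message scheme):

#include <mpi.h>

int send_big(const double *buf, MPI_Count n, int dst, int tag, MPI_Comm comm)
{
    const MPI_Count chunk = 1 << 20;        /* elements per derived type */
    MPI_Count nchunks = n / chunk, rem = n % chunk;
    MPI_Datatype chunktype;
    MPI_Type_contiguous((int)chunk, MPI_DOUBLE, &chunktype);
    MPI_Type_commit(&chunktype);
    /* nchunks itself must still fit in int -- exactly the kind of limit
     * the proposed large-count API removes. */
    MPI_Send(buf, (int)nchunks, chunktype, dst, tag, comm);
    MPI_Send(buf + nchunks * chunk, (int)rem, MPI_DOUBLE, dst, tag, comm);
    MPI_Type_free(&chunktype);
    return MPI_SUCCESS;
}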
326E4DD5D Unknown Unknown Unknown
> a.out 000000403769 Unknown Unknown Unknown
>
>
> Siva Srinivas Kolukula, PhD
> Indian National Centre for Ocean Information Services (INCOIS)
>
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
> Is there any way that I can compile and link the code using Open MPI 1.6.2?
>
> Thanks,
> Arham Amouei
--
Jeff Hammond
jeff.scie...@gmail
Has this error, i.e. double free or corruption, been reported by others?
>> Is there a bug fix available?
>>
>> Regards,
>>
>> Ashwin.
>>
>>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
OpenMPI's mpirun in the following
>>>
>>> way
>>>
>>> mpirun -np 4 cfd_software
>>>
>>> and I get double free or corruption every single time.
>>>
>>> I have two questions -
>>>
>>> 1) I am unable to captu
. Thanks
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
eded.
>>>
>>> Best
>>> Joseph
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
> supports allocations up to 60GB, so my second point reported below may be
> invalid. Number 4 still seems curious to me, though.
>
> Best
> Joseph
>
> On 08/25/2017 09:17 PM, Jeff Hammond wrote:
>
>> There's no reason to do anything special for shared memory
Would it be possible to use anonymous shared memory
> mappings to avoid the backing file for large allocations (maybe above a
> certain threshold) on systems that support MAP_ANONYMOUS and distribute the
> result of the mmap call among the processes on the node?
>
> Thanks,
> Joseph
&g
higher communication latencies as well).
> >>
> >> Regarding the size limitation of /tmp: I found an opal/mca/shmem/posix
> >> component that uses shm_open to create a POSIX shared memory object
> >> instead of a file on disk, which is then mmap'ed. Unfortuna
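For reference, the POSIX route looks roughly like this (my sketch, error
handling omitted; the object lives in /dev/shm on Linux rather than /tmp,
and someone must shm_unlink the name once every rank has mapped it):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

void *create_shared(const char *name, size_t len)
{
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    ftruncate(fd, (off_t)len);
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);   /* the mapping stays valid after the fd is closed */
    return p;
}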
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
>> >> In general, what is a clean way to build OpenMPI with a GNU compiler set
>> but
>> >> then instruct the wrappers to use Intel compiler set?
>> >>
>> >> Thanks!
>> >> Michael
>> >>
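(One approach, as a sketch: Open MPI's wrapper compilers can be overridden
at use time through environment variables, e.g.

export OMPI_CC=icc
export OMPI_FC=ifort
mpicc --showme

should then print an icc command line. Check the FAQ for whether your
version honors these.)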
and that someone
will deliberately disobey the previous request because I made it.
--
Jeff Hammond
jeff.scie...@gmail.com
[Attachment: config.log.tbz]
--
Jeff Hammond
jeff.scie...@gmail.com
ny help!
>
> Sincerely,
> Erin
>
--
Jeff Hammond
jeff.scie...@gmail.com
> recvCount[i] = bufsize;
>
> displ[i] = bufsize*i;
>
> }
>
> for (i=0;i<bufsize;i++)
> sbuf[i] = myid+i;
>
> printf("buffer size: %ld recvCount[0]:%d last displ
> index:%d\n",bufsize,recvCount[0],displ[nproc-1]);
>
> fflush(stdout);
>
>
> MPI_Allgatherv(sbuf,recvCount[0], MPI_INT,rbuf,recvCount,displ,MPI_INT,
>
> MPI_COMM_WORLD);
>
>
> printf("OK\n");
>
> fflush(stdout);
>
> MPI_Finalize();
>
> return 0;
>
> }
>
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
Line
> Source
> teststdin 0040BF48 Unknown Unknown
Unknown
>
>
>
> mpirun -mca mtl ^psm -mca btl self,sm -np 8 ./teststdin < /tmp/a
>
> id0
> Process0 says "Hello, world!"
> READ from stdin
> zer
>
> Process1 says "Hello, world!"
> ...
>
>
>
> Is it a known problem ?
>
> Fabrice BOYRIE
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
You try Google Scholar yet?
Always exhaust all nonhuman resources before requesting human
assistance. The human brain is a terrible resource to waste when a
computer can do the job.
Jeff
On Oct 3, 2013, at 10:18 AM, Yin Zhao wrote:
> Hi all,
>
> Has anybody done expe
>
>
>
> OpenMPI-1.6.5 works with my code when I pass the same array to SEND_BUF and
> RECV_BUF instead of using MPI_IN_PLACE for those same GATHERV, AGATHERV, and
> SCATTERVs.
>
>
>
>
>
> -Charles
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
than I tested.
Squyres told me some thread-oriented refactoring was going on. All of
this was over a year ago so it is entirely reasonable for me to be
wrong about all of this.
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
the MPI implementations of interest. The
> additional threads referred to are "inside the MPI rank," although I suppose
> additional application threads not involved with MPI are possible.
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
and -fdefault-integer-8
>
> Ok. I was using icc/ifort with -m64 and -i8.
>
--
Jeff Hammond
jeff.scie...@gmail.com
That's a relatively old version of OMPI. Maybe try the latest
release? That's always the safe bet since the issue might have been
fixed already.
I recall that OMPI uses ROMIO so you might try to reproduce with MPICH
so you can report it to the people that wrote the MPI-IO code. Of
course, this mi
MPI_{Gather,Allgather}v are appropriate for this. See docs for details.
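Roughly like this, as an untested sketch (gather the lengths first, then
the characters):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const char *mine = (rank == 0) ? "hi" : "world";
    int mylen = (int)strlen(mine);

    int *lens = NULL, *displs = NULL;
    char *all = NULL;
    if (rank == 0) {
        lens = malloc(size * sizeof(int));
        displs = malloc(size * sizeof(int));
    }
    /* Step 1: everyone tells the root how many chars it holds. */
    MPI_Gather(&mylen, 1, MPI_INT, lens, 1, MPI_INT, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        int total = 0;
        for (int i = 0; i < size; i++) { displs[i] = total; total += lens[i]; }
        all = malloc(total + 1);
        all[total] = '\0';
    }
    /* Step 2: variable-length gather of the characters themselves. */
    MPI_Gatherv(mine, mylen, MPI_CHAR, all, lens, displs, MPI_CHAR,
                0, MPI_COMM_WORLD);
    if (rank == 0) printf("%s\n", all);
    MPI_Finalize();
    return 0;
}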
Jeff
On Nov 9, 2013, at 6:15 PM, Saliya Ekanayake wrote:
Hi,
I want to concatenate a bunch of strings from MPI processes. For example, say
with 2 processes,
P1 has text "hi"
P2 has text "world"
I have thes
--
Jeff Hammond
jeff.scie...@gmail.com
--
Jeff Hammond
jeff.scie...@gmail.com
Try pure PGI and pure GCC builds. If only the mixed one fails, then I saw a
problem like this in MPICH a few months ago. It appears PGI does not play nice
with GCC regarding the C standard library functions. Or at least that's what I
concluded. The issue remains unresolved.
Jeff
Unique to each process?
Try this:
int rank;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);
uint64_t unique = rank;
To get additional unique values:
int size;
MPI_Comm_size(MPI_COMM_WORLD, &size);
unique += size;
If this is insufficient, please ask the question differently.
There is no canonical met
One-sided is quite simple to understand. It is like file I/O. You read/write
(get/put) to a memory object. If you want to make it hard to screw up, use
passive target and wrap your calls in lock/unlock so every operation is
globally visible where it's called.
I've never deadlocked RMA while p2p
> 2) On any random processor (say rank X), I calculate the two integer values,
> Y and Z.
> 3) On processor X, I want to get the value of A(Z) on processor Y.
>
>
> This operation will happen parallely on each processor. Can anyone please
> help me with this?
>
>
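A sketch of the passive-target approach suggested above (my example; names
follow the question, error handling omitted):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 100;                  /* local elements of A per rank */
    double *A;
    MPI_Win win;
    MPI_Win_allocate(n * sizeof(double), sizeof(double),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &A, &win);

    /* Initialize the local window inside a lock/unlock epoch, then
     * barrier so every rank's data is ready before anyone reads. */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, rank, 0, win);
    for (int i = 0; i < n; i++) A[i] = rank * n + i;
    MPI_Win_unlock(rank, win);
    MPI_Barrier(MPI_COMM_WORLD);

    int Y = (rank + 1) % size, Z = 7;   /* arbitrary target rank/index */
    double value;
    MPI_Win_lock(MPI_LOCK_SHARED, Y, 0, win);
    MPI_Get(&value, 1, MPI_DOUBLE, Y, Z, 1, MPI_DOUBLE, win);
    MPI_Win_unlock(Y, win);             /* value is valid after unlock */

    printf("rank %d got A(%d)=%g from rank %d\n", rank, Z, value, Y);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}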
By examining your openmpi
> build logs to see if it builds for both __float80 and __float128 (or
> neither). gfortran has a 128-bit data type (software floating point
> real(16), corresponding to __float128); you should be able to see in the
> build logs whether that data type was used.
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
If someone knows an easy way to get a late-model OpenMPI in Travis using a
method other than what I've indicated above, by all means suggest that. I
am still new to Travis CI and would be happy to learn new things.
Thanks,
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
Sorry for the lack of explicit context in this reply, but I am signed up to
this list (and many others) in no-email mode.
Best,
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
But the warning message (
https://github.com/hpc/cce-mpi-openmpi-1.6.4/blob/master/ompi/mca/common/sm/help-mpi-common-sm.txt)
suggested that I could override it by setting TMP, TEMP or TEMPDIR, which I
did to no avail.
Thanks,
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
nclude=sockets
>
>
Thanks. I'm glad that there is an option to set them this way.
> In the spirit OMPI - may the force be with you.
>
>
All I will say here is that Open-MPI has a Vader BTL :-)
>
> > On Thu 19.11.2015 09:44:20 Jeff Hammond wrote:
> > >
h mpich setup as well,
> hence
> the suggestion for the mpich shm mechanism.
>
>
The first OSS implementation of MPI that I can use on Cray XC using OFI
gets a prize at the December MPI Forum.
Best,
Jeff
> Howard
>
>
>
> 2015-11-19 16:59 GMT-07:00 Jeff Hammond :
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
> I'm seeing similar failures for mvapich2-2.1 and mpich-3.2. Does anyone
> know if this stuff is supposed to work? I've had pretty good luck using the
> original RMA calls (MPI_Put, MPI_Get and MPI_Accumulate) with
> MPI_Lock/MPI_Unlock but the request-based calls are mostly a complete
> failure.
>
>
>
> Bruce Palmer
>
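For reference, the request-based pattern at issue looks roughly like this
(a sketch from memory, inside a passive-target epoch; the names are mine):

MPI_Request req;
MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
MPI_Rput(buf, 1, MPI_DOUBLE, target, 0, 1, MPI_DOUBLE, win, &req);
MPI_Wait(&req, MPI_STATUS_IGNORE);  /* local completion of the put */
MPI_Win_unlock(target, win);        /* remote completion */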
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
il.so.1 (0x7f4f25d21000)
> libpthread.so.0 => /lib64/libpthread.so.0 (0x7f4f25b04000)
> libc.so.6 => /lib64/libc.so.6 (0x7f4f2577)
> /lib64/ld-linux-x86-64.so.2 (0x7f4f266cf000)
> il...@grid.ui.savba.sk:~/bin/openmpi-1.10.1_pgi_static/bin/.
>
>
> Any help, please ?
>
> Miro
>
>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
fix, but you should
nonetheless be unforgiving of whoever broke your user experience.
Jeff
> On Dec 31, 2015, at 3:10 PM, Matt Thompson wrote:
>
>> On Thu, Dec 31, 2015 at 4:37 PM, Jeff Hammond wrote:
>> Try using the same LDFLAGS for PGI. I think you go
>>>>>> > Hello from thread 6 out of 7 from process 1 out of 4 on borgo035 on CPU 6
>>>>>> > Hello from thread 0 out of 7 from process 2 out of 4 on borgo035 on CPU 14
>>>>>> > Hello from thread 0 out of 7 from process 3 out of 4 on borgo035 on CPU 14
>>>>>> > Hello from thread 1 out of 7 from process 2 out of 4 on borgo035 on CPU 15
>>>>>> > Hello from thread 1 out of 7 from process 3 out of 4 on borgo035 on CPU 15
>>>>>> > Hello from thread 2 out of 7 from process 2 out of 4 on borgo035 on CPU 16
>>>>>> > Hello from thread 2 out of 7 from process 3 out of 4 on borgo035 on CPU 16
>>>>>> > Hello from thread 3 out of 7 from process 2 out of 4 on borgo035 on CPU 17
>>>>>> > Hello from thread 3 out of 7 from process 3 out of 4 on borgo035 on CPU 17
>>>>>> > Hello from thread 4 out of 7 from process 2 out of 4 on borgo035 on CPU 18
>>>>>> > Hello from thread 4 out of 7 from process 3 out of 4 on borgo035 on CPU 18
>>>>>> > Hello from thread 5 out of 7 from process 2 out of 4 on borgo035 on CPU 19
>>>>>> > Hello from thread 5 out of 7 from process 3 out of 4 on borgo035 on CPU 19
>>>>>> > Hello from thread 6 out of 7 from process 2 out of 4 on borgo035 on CPU 20
>>>>>> > Hello from thread 6 out of 7 from process 3 out of 4 on borgo035 on CPU 20
>>>>>> >
>>>>>> >
>>>>>> > Obviously not right. Any ideas on how to help me learn? The man
>>>>>> > mpirun page is a bit formidable in the pinning part, so maybe I've
>>>>>> > missed an obvious answer.
>>>>>> >
>>>>>> > Matt
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
I've been trying to
solve the latter problem for years and have made very little progress as
far as the spec goes.
Related work:
- http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf
-
http://www.orau.gov/hpcor2015/whitepapers/Exascale_Computing_without_Threads-Barry_Smith.pdf
Do not feed the trolls ;-)
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
s,
>
> Gilles
>
>
> On Wed, Jan 13, 2016 at 9:37 AM, Jeff Hammond
> wrote:
> > Example 4.23 of MPI 3.1 (it is hardly a new example, but may have a
> > different number in older versions) demonstrates the use of
> > (buffer=NULL,count=0,type=MPI_DATATYPE_NULL). W
6 at 8:25 PM, Jim Edwards wrote:
>
>> Hi Gilles,
>>
>> I think that your conversation with Jeff pretty much covered it but
>> your understanding of my original problem is correct.
>> Thanks for the prompt response and the PR.
>>
>> On Tue, Jan 12, 2016 a
> int main(int argc, char **argv)
>> {
>> int rank, ntasks;
>>
>> MPI_Init(&argc, &argv);
>>
>> MPI_Comm_rank(MPI_COMM_WORLD,&rank);
>> MPI_Comm_size(MPI_COMM_WORLD, &ntasks);
>>
>> printf("rank %d ntasks %d\n",rank, ntasks);
>>
>> my_mpi_test(rank,ntasks);
>>
>>
>> MPI_Finalize();
>> }
>>
>>
>>
>>
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
>> I still believe George's interpretation is the correct one, and Bill
>> Gropp agreed with him.
>>
>>
>> and btw, is example 4.23 correct?
>> /* fwiw, I did copy/paste it and found several missing local variables
>> (myrank, i, and comm)
>> and I'd rather h
e it's a good idea to say that null buffers have a well-defined MPI
type.
Jeff
> Cheers,
> George.
>
>
>
> Cheers,
>
> Gilles
>
>
>
t_base at once.
>
> Thanks and Regards
> Udayanga
>
>
On Thu, Jan 21, 2016 at 4:07 AM, Dave Love wrote:
>
> Jeff Hammond writes:
>
> > Just using Intel compilers, OpenMP and MPI. Problem solved :-)
> >
> > (I work for Intel and the previous statement should be interpreted as a
> > joke,
>
> Good!
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
lementation from user space. This
includes Open-MPI, MPICH, and MVAPICH2. If you like Open-MPI, try this:
cd $OMPI_DIR && mkdir build && cd build && ../configure
--prefix=$HOME/ompi-install && make -j && make install
...or something like that. I'
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
ich were not seen on other systems.
>
>
If you use an MPI library compiled without Fortran support, you should
expect precisely nothing related to Fortran to work. You might get more
than this because the universe is being nice to you, but you should treat
it as serendipity when something works
said, should we remove these fortran types from include files
> and libs when ompi is configure'd without fortran support ?
>
> Cheers,
>
> Gilles
>
> Jeff Hammond wrote:
>
>>
>> > BTW: is there a reason you don't want to just use the C datatypes?
n MPI is not compliant because it doesn't have Fortran datatypes
available in C!").
>> >
>> > Gilles: do you want to poke around and see if you can make any of
Jeff's suggestions work out nicely? (i.e., give some kind of compile/link
error that states that Open
ion here is not about bindings, but about a predefined datatype, a
> case where I don't think the text applies.
>
> George.
>
>
> On Tue, Feb 9, 2016 at 6:17 PM, Jeff Hammond wrote:
>
>> "MPI-3.0 (and later) compliant Fortran bindings are not only a
>>> the other ranks get valid pointers, except rank 0.
>>>
>>> Best regards,
>>> Peter
s the rank to get the pointer for the first non-zero
> sized segment in the shared memory window.
Indeed! I forgot about that. MPI_Win_shared_query solves this problem for
the user brilliantly.
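A minimal sketch (mine, untested) of the query with a zero-sized segment
on rank 0:

MPI_Comm node;
MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                    MPI_INFO_NULL, &node);
int nrank;
MPI_Comm_rank(node, &nrank);
MPI_Aint bytes = (nrank == 0) ? 0 : 4;   /* rank 0 contributes nothing */
int *mybase;
MPI_Win win;
MPI_Win_allocate_shared(bytes, sizeof(int), MPI_INFO_NULL, node,
                        &mybase, &win);
MPI_Aint qsize;
int qdisp;
int *qbase;
/* MPI_PROC_NULL returns the first non-zero-sized segment, so even
 * rank 0 gets a usable pointer into the shared region. */
MPI_Win_shared_query(win, MPI_PROC_NULL, &qsize, &qdisp, &qbase);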
Jeff
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
=0x100fa2000
On Thu, Feb 11, 2016 at 8:55 AM, Jeff Hammond
wrote:
>
>
> On Thu, Feb 11, 2016 at 8:46 AM, Nathan Hjelm wrote:
> >
> >
> > On Thu, Feb 11, 2016 at 02:17:40PM +, Peter Wind wrote:
> > >I would add that the present situation is bound to
base=0x10aac1000
> >
> > query: me=2, them=2, size=4, disp=1, base=0x10aac1004
> >
> > query: me=2, them=3, size=4, disp=1, base=0x10aac1008
> >
> > query: me=2, them=PROC_NULL, size=4, disp=1, base=0x10aac1000
> >
> > query: me=3,
; query: me=1, them=PROC_NULL, size=4, disp=1, base=0x102d3b000
> > >
> > > query: me=2, them=0, size=0, disp=1, base=0x10aac1000
> > >
> > > query: me=2, them=1, size=4, disp=1, base=0x10aac1000
> > >
> > > query: me=2, them=2, size=4, d
>>>> 1) Is this backwards incompatibility issue an Open MPI bug?
>>>>
>>>> 2) Can I expect that my binary will work with future mpiexec
>>>> versions >= 1.10 (which it was built with)?
>>>>
>>>> Thanks and best regards,
>>>> Kim
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
his.
>
> and by the way, does Open MPI able to checkpoint or restart mpi
> application/GROMACS automatically ?
> Please, I really need help.
>
> Regards,
>
>
> Husen
>
--
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
rate process running in node A to other node, let's say to
> node C.
> is there a way to do this with Open MPI? Thanks.
>
> Regards,
>
> Husen
>
>
>
>
> On Wed, Mar 16, 2016 at 12:37 PM, Jeff Hammond wrote:
>
>> Why do you need OpenMPI to do th