into different communicators and this pattern breaks that
logic.
Is there an alternative approach to doing this?
Thank you,
Saliya
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
Indiana University, Bloomington
environment variables (i.e., $OMPI_*)
> to its ranks. So I believe you may use $OMPI_COMM_WORLD_LOCAL_RANK to
> specifically filter out parameters within the script.
>
> Regards
> Udayanga Wickramasinghe
> Research Assistant
> School of Informatics and Computing | CREST
> Indiana University, Bloomington
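As a concrete illustration of that suggestion, here is a minimal sketch in plain Java (assuming only that mpirun exports OMPI_COMM_WORLD_LOCAL_RANK to each process as described above; the class name and parameter layout are made up for the example):

// LocalRankParams.java - pick a per-rank parameter using the node-local rank
// that Open MPI exports to every launched process.
public class LocalRankParams {
    public static void main(String[] args) {
        String localRankStr = System.getenv("OMPI_COMM_WORLD_LOCAL_RANK");
        int localRank = (localRankStr != null) ? Integer.parseInt(localRankStr) : 0;

        // args carries one parameter per local rank, e.g. passed from the launch script.
        String myParam = (localRank < args.length) ? args[localRank] : "default";
        System.out.println("local rank " + localRank + " uses parameter " + myParam);
    }
}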
I tested, and the number of ranks in the world communicator is correct. I couldn't find
the bug that causes the program to produce erroneous answers when this
scheme is used, though.
On Fri, Jul 29, 2016 at 3:38 PM, Saliya Ekanayake wrote:
> Thank you, that's good to know.
>
> Yes, tes
rplus/Greenplum_RalphCastain-1up.pdf, but
wonder if there are detailed steps on getting a simple MapReduce (MR) program
running with OpenMPI.
Thank you,
Saliya
On Mon, Feb 24, 2014 at 1:22 PM, Saliya Ekanayake wrote:
> Thank you Ralph. I'll get back to you if I run into issues.
>
>
> On Mo
t 0]]:
[B/B/B/B][./././.]
[i52:31765] MCW rank 1 bound to socket 0[core 0[hwt 0]], socket 0[core
1[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]]:
[B/B/B/B][./././.]
Is there a better way without using -cpus-per-proc as suggested to get the
same effect?
Thank you,
Saliya
--
Saliya
ons plus k-means
clustering and matrix multiplication.
If possible I'd like to contribute these to OpenMPI and wonder what your
input on this would be.
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
> Thanks!
> Ralph
>
> On Apr 3, 2014, at 1:44 PM, Saliya Ekanayake wrote:
>
> Hi,
>
> I've been working on some applications in our group where I've been using
> the OpenMPI Java binding. Over the course of this work, I've accumulated
> several samples that I w
faq/?category=java).
>
>
> On Apr 3, 2014, at 7:09 PM, Saliya Ekanayake wrote:
>
> > Great. I will cleanup and send you a tarball.
> >
> > Thank you
> > Saliya
> >
> > On Apr 3, 2014 5:51 PM, "Ralph Castain" wrote:
> > We'd be happ
to speed up these Tx1xN
cases? Also, I expected B to perform better than A as threads could utilize
all 8 cores, but that wasn't the case.
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
ry, and so messaging will run slower -
> and you want the ranks that share a node to be the ones that most
> frequently communicate to each other, if you can identify them.
>
> HTH
> Ralph
>
> On Apr 10, 2014, at 5:59 PM, Saliya Ekanayake wrote:
>
> Hi,
>
> I am eval
Just an update. Yes, binding to all is the same as binding to none. I was
misremembering :)
On Fri, Apr 11, 2014 at 1:22 AM, Saliya Ekanayake wrote:
> Thank you Ralph for the details and it's a good point you mentioned on
> mapping by node vs socket. We have another program
f you
could give some suggestions on how to build OpenMPI with Gemini support.
[1]
https://www.open-mpi.org/papers/cug-2012/cug_2012_open_mpi_for_cray_xe_xk.pdf
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
http://saliya.org
luster. Unfortunately, due to a Cray bug, case 80503, that has
> not yet worked.
> Ray
>
>
> On 4/16/2014 4:44 PM, Saliya Ekanayake wrote:
>
> Hi,
>
> We have a Cray XE6/XK7 supercomputer (BigRed II) and I was trying to get
> OpenMPI Java b
http://saliya.org
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
n and send it
I guess 2 would internally copy to a buffer and use it, so I'm
suggesting 1 is the best option. Is this the case, or is there a better way
to do this?
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
http://saliya.org
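For reference, here is roughly what I mean by 1, i.e. flattening the 2-D array
into a single direct buffer before sending it in one call. This is only a sketch
against my understanding of the 1.8-era Java bindings (I'm assuming
MPI.newDoubleBuffer and the lowercase send/recv methods are available); the
sizes and ranks are placeholders:

// Sketch: copy a double[rows][cols] into one direct buffer and send it in one call.
import java.nio.DoubleBuffer;
import mpi.MPI;

public class FlattenSend {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rows = 4, cols = 8;                        // placeholder sizes
        double[][] data = new double[rows][cols];      // the 2-D array to transfer

        DoubleBuffer buf = MPI.newDoubleBuffer(rows * cols); // direct buffer
        for (int i = 0; i < rows; i++) {
            buf.position(i * cols);
            buf.put(data[i], 0, cols);                 // one row at a time
        }

        if (MPI.COMM_WORLD.getRank() == 0) {
            MPI.COMM_WORLD.send(buf, rows * cols, MPI.DOUBLE, 1, 0);
        } else if (MPI.COMM_WORLD.getRank() == 1) {
            MPI.COMM_WORLD.recv(buf, rows * cols, MPI.DOUBLE, 0, 0);
        }
        MPI.Finalize();
    }
}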
Thank you Oscar for the detailed information, but I'm still wondering how
the copying in 2 would be different from what's done here with
copying to a buffer.
On Fri, Aug 22, 2014 at 2:17 PM, Oscar Vega-Gisbert
wrote:
> El 22/08/14 17:10, Saliya Ekanayake escribió:
>
>
s for copying?
Thank you,
Saliya
On Fri, Aug 22, 2014 at 3:24 PM, Oscar Vega-Gisbert
wrote:
> El 22/08/14 20:44, Saliya Ekanayake escribió:
>
> Thank you Oscar for the detailed information, but I'm still wondering how
>> would the copying in 2 would be different t
Yes, these are all MPI_DOUBLE
On Fri, Aug 22, 2014 at 3:38 PM, Rob Latham wrote:
>
>
> On 08/22/2014 10:10 AM, Saliya Ekanayake wrote:
>
>> Hi,
>>
>> I've a quick question about the usage of Java binding.
>>
>> Say there's a 2 dimensional d
Please find my comments inline.
On Fri, Aug 22, 2014 at 3:45 PM, Rob Latham wrote:
>
>
> On 08/22/2014 02:40 PM, Saliya Ekanayake wrote:
>
>> Yes, these are all MPI_DOUBLE
>>
>
> well, yeah, but since you are talking about copying into a "direct buffer"
. The ompi_info output is attached.
2. cd to the examples directory and run mpicc hello_c.c.
3. Run mpirun -np 2 ./a.out.
4. The error text is attached.
Please let me know if you need more info.
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliy
iya,
>
> Would you mind trying to reproduce the problem using the latest 1.8
> release - 1.8.3?
>
> Thanks,
>
> Howard
>
>
> 2014-11-04 11:10 GMT-07:00 Saliya Ekanayake :
>
>> Hi,
>>
>> I am using OpenMPI 1.8.1 in a Linux cluster that we recently se
p/
>
>
>
> On Nov 4, 2014, at 1:10 PM, Saliya Ekanayake wrote:
>
> > Hi,
> >
> > I am using OpenMPI 1.8.1 in a Linux cluster that we recently setup. It
> builds fine, but when I try to run even the simplest hello.c program it'll
> cause a segfault. Any
Hi Jeff,
You are probably busy, but just checking if you had a chance to look at
this.
Thanks,
Saliya
On Thu, Nov 6, 2014 at 9:19 AM, Saliya Ekanayake wrote:
> Hi Jeff,
>
> I've attached a tar file with information.
>
> Thank you,
> Saliya
>
> On Tue, Nov 4,
dditional information to give a clue as to what is happening. :-(
>
>
>
> On Nov 9, 2014, at 11:43 AM, Saliya Ekanayake wrote:
>
> > Hi Jeff,
> >
> > You are probably busy, but just checking if you had a chance to look at
> this.
> >
> > Thanks,
>
Hi,
Is it possible to get information on the process affinity that's set by the
mpirun command from within the MPI program? For example, I'd like to know the
number of cores that a given rank is bound to.
Thank you
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Infor
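In the meantime, a rough Linux-only workaround I've been considering (nothing to
do with the MPI API itself, and assuming the usual Cpus_allowed_list format in
/proc, e.g. "0-3" or "0-3,8") is to read the kernel's view of the process's CPU mask:

// Sketch: count how many CPUs the current process is allowed to run on (Linux only).
import java.io.BufferedReader;
import java.io.FileReader;

public class BoundCores {
    public static void main(String[] args) throws Exception {
        try (BufferedReader r = new BufferedReader(new FileReader("/proc/self/status"))) {
            String line;
            while ((line = r.readLine()) != null) {
                if (!line.startsWith("Cpus_allowed_list:")) continue;
                String list = line.split(":")[1].trim();   // e.g. "0-3" or "0-3,8"
                int count = 0;
                for (String part : list.split(",")) {
                    String[] range = part.split("-");
                    count += (range.length == 2)
                            ? Integer.parseInt(range[1]) - Integer.parseInt(range[0]) + 1
                            : 1;
                }
                System.out.println("bound to " + count + " core(s): " + list);
            }
        }
    }
}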
k you,
Saliya
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
Indiana University, Bloomington
Cell 812-391-4914
http://saliya.org
ou specified - I believe by default
> we bind to socket when mapping by socket. If you want them bound to core,
> you might need to add --bind-to core.
>
> I can take a look at it - I *thought* we had reset that to bind-to core
> when PE=N was specified, but maybe that got lost.
&
Thank you and one last question. Is it possible to avoid a core and
instruct OMPI to use only the other cores?
On Mon, Dec 22, 2014 at 2:08 PM, Ralph Castain wrote:
>
> On Dec 22, 2014, at 10:45 AM, Saliya Ekanayake wrote:
>
> Hi Ralph,
>
> Yes the report bindings show the
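For the archives: the direction I'm thinking of, going from memory of the mpirun
man page (so the exact option name may be off), is to restrict the job to an
explicit core list, e.g.

mpirun --cpu-set 1,2,3,4,5,6,7 --bind-to core -np 7 java MyApp

which should leave core 0 free for other work.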
_object_t *)
(&m->cm_recv_msg_queue))->obj_magic_id' failed.
Thank you,
Saliya
On Mon, Nov 10, 2014 at 10:01 AM, Saliya Ekanayake
wrote:
> Thank you Jeff, I'll try this and let you know.
>
> Saliya
> On Nov 10, 2014 6:42 AM, "Jeff Squyres (jsquyres)"
> wr
btl_openib_connect_udcm.c:736: udcm_module_finalize:
> Assertion `((0xdeafbeedULL << 32) + 0xdeafbeedULL) == ((opal_object_t *)
> (&m->cm_recv_msg_queue))->obj_magic_id' failed.
>
> Thank you,
> Saliya
>
> On Mon, Nov 10, 2014 at 10:01 AM, Saliya Ekanayake
> w
What I heard from the administrator is that,
"The tests that work are the simple utilities ib_read_lat and ib_read_bw
that measure latency and bandwidth between two nodes. They are part of
the "perftest" repo package."
On Dec 28, 2014 10:20 AM, "Saliya Ekanayake"
the ibv_ud_pingpong test - that will exercise
> the portion of the system under discussion.
>
>
> On Dec 28, 2014, at 2:31 PM, Saliya Ekanayake wrote:
>
> What I heard from the administrator is that,
>
> "The tests that work are the simple utilities ib_read_lat and ib_read_bw
>
the test worked, but you are still encountering an error
> when executing an MPI job? Or are you saying things now work?
>
>
> On Dec 28, 2014, at 5:58 PM, Saliya Ekanayake wrote:
>
> Thank you Ralph. This produced the warning on memory limits similar to [1]
> and setting ul
nfigure Open MPI with
> --enable-mpi-ext=affinity or --enable-mpi-ext=all). See:
>
> http://www.open-mpi.org/doc/v1.8/man3/OMPI_Affinity_str.3.php
>
>
>
> On Dec 21, 2014, at 1:57 AM, Saliya Ekanayake wrote:
>
> > Hi,
> >
> > Is it possible to get info
rtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Thank you,
Saliya
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
Indiana University, Bloomington
Cell 812-391-4914
http://saliy
ing? If not the latest, is it possible to
> upgrade to the latest OFED? Otherwise, can you try the latest OMPI release (>=
> v1.8.4), where this warning is ignored on older OFEDs
>
> -Devendar
>
> On Sun, Feb 8, 2015 at 12:37 PM, Saliya Ekanayake
> wrote:
>
>> Hi,
>>
[hwt 0]], socket 0[core 2[hwt 0]], socket 0[core 3[hwt 0]], socket 0[core
4[hwt 0]], socket 0[core 5[hwt 0]]:
[B/B/B/B/B/B][./././././.][./././././.][./././././.]
Thank you,
Saliya
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
.
> As the warning indicates, it can impact performance but won't stop you from
> running
>
>
> On Mar 12, 2015, at 12:51 PM, Saliya Ekanayake wrote:
>
> Hi,
>
> I am getting the following binding warning and wonde
Thank you. It worked!
On Fri, Mar 13, 2015 at 10:37 AM, Ralph Castain wrote:
> You shouldn’t have to do so
>
> On Mar 13, 2015, at 7:14 AM, Saliya Ekanayake wrote:
>
> Thanks Ralph. Do I need to specify where to find numactl-devel when
> compiling OpenMPI?
>
> On Thu,
nd-to socket
My understanding is that this will give each process 4 cores. Now, with
bind to socket, does that mean the 4 cores assigned to a process within a
socket may change? Or will they always stay on the same 4
cores?
Thank you,
Saliya
--
Saliya Ekanayake
Ph.D.
3:46 PM, Ralph Castain wrote:
> Perhaps we should error out, but at the moment, PE=4 forces bind-to-core
> and so the bind-to socket is being ignored
>
> On May 19, 2016, at 12:06 PM, Saliya Ekanayake wrote:
>
> Hi,
>
> I understand --map-by will determine the process
t is what we will do - as I said, the
> --bind-to socket directive will be ignored.
>
> On May 19, 2016, at 1:03 PM, Saliya Ekanayake wrote:
>
> So if bind-to-core is in effect, does that mean it'll run only on 1 core
> even though I'd like it to be able to utilize 4 core
like to pin them to each core the process has been bound to.
> >
> > On Thu, May 19, 2016 at 3:46 PM, Ralph Castain wrote:
> > Perhaps we should error out, but at the moment, PE=4 forces bind-to-core
> and so the bind-to socket is being ignored
> >
> > On May 19, 2016, a
> It is true that we generally configure our schedulers to set the max
> #slots on each node to equal the #cores on the node - but that is purely a
> configuration choice.
>
>
> On May 19, 2016, at 4:29 PM, Saliya Ekanayake wrote:
>
> Thank you, Tetsuya. So is a slot = core?
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
Indiana University, Bloomington
Hi,
I ran the OSU (Ohio State) micro-benchmarks with OpenMPI and noticed that a
broadcast of a small number of bytes is faster than a barrier - 2us vs 120us.
I'm trying to understand how this could happen.
Thank you
Saliya
e times?
>
> Thanks,
>
> Matthieu
>
> --
> From: users [users-boun...@open-mpi.org] on behalf of Saliya Ekanayake [
> esal...@gmail.com]
> Sent: Monday, May 30, 2016 7:53 AM
> To: Open MPI Users
> Subject: [OMPI users] Broadcast faster
l.
>
> Cheers,
>
> Gilles
>
>
> On Monday, May 30, 2016, Saliya Ekanayake wrote:
>
>> Hi,
>>
>> I ran Ohio micro benchmarks for openmpi and noticed broadcast with
>> smaller number of bytes is faster than a barrie
f Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>
Hi,
I am trying to understand this peculiar behavior where the communication
time in OpenMPI changes depending on the number of process elements (cores)
the process is bound to.
Is this expected?
Thank you,
saliya
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics
o time sharing.
> but if the task is bound on more than one core, then the task and the
> helper run in parallel.
>
>
> Cheers,
>
> Gilles
>
> On 6/23/2016 1:21 PM, Saliya Ekanayake wrote:
>
> Hi,
>
> I am trying to understand this peculiar behavior where
process, i guess case 1 and case 2
> will become pretty close.
>
> i also suggest that for cases 2 and 3, you bind processes to a socket
> instead of no binding at all
>
> Cheers,
>
> Gilles
>
> On 6/23/2016 2:41 PM, Saliya Ekanayake wrote:
>
> Thank you, Gilles for
Core 4?
P.S. I can manually achieve this within the program using sched_setaffinity(),
but that's not portable.
Thank you,
Saliya
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
Indiana University, Bloomington
OCAL_RANK
> envar, and then use that to calculate the offset location for your threads
> (i.e., local rank 0 is on socket 0, local rank 1 is on socket 1, etc.). You
> can then putenv the correct value of the GOMP envar
>
>
> On Jun 28, 2016, at 8:40 PM, Saliya Ekanayake wrote:
>
's version
seem to support that, though.
On Wed, Jun 29, 2016 at 1:20 AM, Saliya Ekanayake wrote:
> Thank you, Ralph and Gilles.
>
> I didn't know about the OMPI_COMM_WORLD_LOCAL_RANK variable. Essentially,
> this means I should be able to wrap my application call in a
Hi,
I see in mca_coll_sm_comm_query() of ompi/mca/coll/sm/coll_sm_module.c
that allreduce and bcast have shared-memory implementations.
Is there a way to know whether this implementation is being used when running my
program that calls these collectives?
Thank you,
Saliya
--
Saliya
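One thing I plan to try, assuming the usual per-framework verbosity MCA
parameters apply to coll as well, is raising the coll verbosity so the selected
module is reported when communicators are created, e.g.

mpirun --mca coll_base_verbose 10 -np 2 java MyApp

and then looking for coll:sm in the output.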
aliya
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
Indiana University, Bloomington
r is an inter
> communicator or the communicator spans several nodes.
>
> you can have a look at the source code, and you will note that bcast does
> not use send/recv. Instead, it uses shared memory, so hopefully it is
> faster than other modules
>
>
> Cheers,
>
>
> Gilles
.
Cheers,
Gilles
On Thursday, June 30, 2016, Saliya Ekanayake wrote:
> Thank you, Gilles.
>
> What is the bcast I should look for? In general, how do I know which
> module was used for which communication - can I print this info?
> On Jun 30, 2016 3:19 AM, "Gilles Gouaillardet
> (and libfabric, but I do not know the details...)
>
> Cheers,
>
> Gilles
>
> On Thursday, June 30, 2016, Saliya Ekanayake wrote:
>
>> OK, I am beginning to see how it works now. One question I still have is,
>> in the case of a multi-node communicator it seems coll/tuned (o
try
> mpirun --mca coll_ml_priority 100 ...
>
> Cheers,
>
> Gilles
>
> On Thursday, June 30, 2016, Saliya Ekanayake wrote:
>
>> Thank you, Gilles. The reason for digging into intra-node optimizations
>> is that we've implemented several machine learning app
>>>> [titan01:01173] *** Process received signal ***
>>>> [titan01:01173] Signal: Aborted (6)
>>>> [titan01:01173] Signal code: (-6)
>>>> [titan01:01172] [ 0] /usr/lib64/libpthread.so.0(+0xf100)[0x2b7e9596a100]
>>>> [titan01:01172] [ 1] /usr/lib64/libc.so.6(gsignal+0x37)[0x2b7e95fc75f7]
>>>> [titan01:01172] [ 2] /usr/lib64/libc.so.6(abort+0x148)[0x2b7e95fc8ce8]
>>>> [titan01:01172] [ 3]
>>>> /home/gl069/bin/jdk1.7.0_25/jre/lib/amd64/server/libjvm.so(+0x742ac5)[0x2b7e96a95ac5]
>>>> [titan01:01172] [ 4]
>>>> /home/gl069/bin/jdk1.7.0_25/jre/lib/amd64/server/libjvm.so(+0x8a2137)[0x2b7e96bf5137]
>>>> [titan01:01172] [ 5]
>>>> /home/gl069/bin/jdk1.7.0_25/jre/lib/amd64/server/libjvm.so(JVM_handle_linux_signal+0x140)[0x2b7e96a995e0]
>>>> [titan01:01172] [ 6] [titan01:01173] [ 0]
>>>> /usr/lib64/libpthread.so.0(+0xf100)[0x2af694ded100]
>>>> [titan01:01173] [ 1] /usr/lib64/libc.so.6(+0x35670)[0x2b7e95fc7670]
>>>> [titan01:01172] [ 7] [0x2b7e9c86e3a1]
>>>> [titan01:01172] *** End of error message ***
>>>> /usr/lib64/libc.so.6(gsignal+0x37)[0x2af69544a5f7]
>>>> [titan01:01173] [ 2] /usr/lib64/libc.so.6(abort+0x148)[0x2af69544bce8]
>>>> [titan01:01173] [ 3]
>>>> /home/gl069/bin/jdk1.7.0_25/jre/lib/amd64/server/libjvm.so(+0x742ac5)[0x2af695f18ac5]
>>>> [titan01:01173] [ 4]
>>>> /home/gl069/bin/jdk1.7.0_25/jre/lib/amd64/server/libjvm.so(+0x8a2137)[0x2af696078137]
>>>> [titan01:01173] [ 5]
>>>> /home/gl069/bin/jdk1.7.0_25/jre/lib/amd64/server/libjvm.so(JVM_handle_linux_signal+0x140)[0x2af695f1c5e0]
>>>> [titan01:01173] [ 6] /usr/lib64/libc.so.6(+0x35670)[0x2af69544a670]
>>>> [titan01:01173] [ 7] [0x2af69c0693a1]
>>>> [titan01:01173] *** End of error message ***
>>>> ---
>>>> Primary job terminated normally, but 1 process returned
>>>> a non-zero exit code. Per user-direction, the job has been aborted.
>>>> ---
>>>>
>>>> --
>>>> mpirun noticed that process rank 1 with PID 0 on node titan01 exited on
>>>> signal 6 (Aborted).
>>>>
>>>>
>>>> CONFIGURATION:
>>>> I used the ompi master sources from github:
>>>> commit 267821f0dd405b5f4370017a287d9a49f92e734a
>>>> Author: Gilles Gouaillardet
>>>> Date: Tue Jul 5 13:47:50 2016 +0900
>>>>
>>>> ./configure --enable-mpi-java
>>>> --with-jdk-dir=/home/gl069/bin/jdk1.7.0_25 --disable-dlopen
>>>> --disable-mca-dso
>>>>
>>>> Thanks a lot for your help!
>>>> Gundram
>>>>
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
Indiana University, Bloomington
ssible for OpenMPI to use Infiniband and not TCP?
Is there a way to guarantee that a test is using TCP, but not IB?
Thank you,
saliya
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics and Computing | Digital Science Center
Indiana University, Bloomington
Thank you, but what's mxm?
On Tue, Jul 19, 2016 at 12:52 AM, Nathan Hjelm wrote:
> You probably will also want to run with -mca pml ob1 to make sure mxm is
> not in use. The combination should be sufficient to force tcp usage.
>
> -Nathan
>
> > On Jul 18, 2016, at
for mxm, you need to
>
> - force pml/ob1 (so mtl/mxm cannot be used by pml/cm)
>
> and
>
> - blacklist btl/openib
>
> your mpirun command line looks like this
>
> mpirun --mca pml ob1 --mca btl ^openib ...
>
>
> Cheers,
>
>
> Gilles
> On 7/
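For completeness, I suppose the positive form of the same thing (assuming the
usual BTL selection syntax) is to whitelist only the TCP and self BTLs instead
of blacklisting openib:

mpirun --mca pml ob1 --mca btl tcp,self ...

so that nothing else can be picked up.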
this module is and is there a
disadvantage in terms of performance by disabling it?
Thank you,
Saliya
--
Saliya Ekanayake, Ph.D
Applied Computer Scientist
Network Dynamics and Simulation Science Laboratory (NDSSL)
Virginia Tech, Blacksburg
l)
>> that suggested to disable psm as a solution.
>>
>> It worked, but I would like to know what this module is and is there a
>> disadvantage in terms of performance by disabling it?
>>
>> Thank you,
>> Saliya
>>
>> --
>> Saliya Ekana
"main" java.lang.UnsatisfiedLinkError:
/N/u/sekanaya/build/lib/libmpi_java.so.0.0.0:
/N/u/sekanaya/build/lib/libmpi.so.0: undefined symbol: opal_maffinity_setup
Any suggestions on fixing this?
Thank you in advance,
Saliya
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
13, at 10:01 PM, Saliya Ekanayake wrote:
>
> Hi,
>
> I obtained the nightly build openmpi-1.9a1r28881 (on 7/19/13) and built it
> with java enabled using,
>
> ./configure --enable-mpi-java --prefix=/N/u/sekanaya/build
>
> Then I wrote a simple MPI program to print the rank o
appears to hold up for all the MPI implementations of interest. The
> > additional threads referred to are "inside the MPI rank," although I
> suppose
> > additional application threads not involved with MPI are possible.
> >
> >
>
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
Hi,
I want to concatenate a bunch of strings from MPI processes. For example, say
with 2 processes,
P1 has text "hi"
P2 has text "world"
I have these stored as char arrays in the processes. Is there a simple way to
do a reduce operation to concat these?
Thank you,
Saliya
--
S
Thanks Jeff, it solved the problem.
Saliya
On Sat, Nov 9, 2013 at 8:15 PM, Jeff Hammond wrote:
> MPI_{Gather,Allgather}v are appropriate for this. See docs for details.
>
> Jeff
>
> Sent from my iPhone
>
> On Nov 9, 2013, at 6:15 PM, Saliya Ekanayake wrote:
>
>
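For anyone finding this in the archives, here is a minimal sketch of the gatherv
approach Jeff suggested, written against my understanding of the Java bindings'
allGather/allGatherv signatures (I'm using bytes rather than Java chars, and all
names are placeholders):

// Sketch: concatenate one string per rank using allGather (lengths) + allGatherv (bytes).
import mpi.MPI;

public class ConcatStrings {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.getRank();
        int size = MPI.COMM_WORLD.getSize();

        byte[] mine = (rank == 0 ? "hi" : "world").getBytes("UTF-8");

        // Share the per-rank lengths so everyone can build counts/displacements.
        int[] lengths = new int[size];
        MPI.COMM_WORLD.allGather(new int[]{mine.length}, 1, MPI.INT, lengths, 1, MPI.INT);

        int[] displs = new int[size];
        int total = 0;
        for (int i = 0; i < size; i++) { displs[i] = total; total += lengths[i]; }

        byte[] all = new byte[total];
        MPI.COMM_WORLD.allGatherv(mine, mine.length, MPI.BYTE, all, lengths, displs, MPI.BYTE);

        if (rank == 0) System.out.println(new String(all, "UTF-8"));
        MPI.Finalize();
    }
}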
Hi,
I was using MPI.OBJECT as the datatype for custom Java classes, but this is no
longer available. Could you please let me know which datatype should be used
for such cases?
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
Hi,
Just want to verify whether sendrecv provides any guarantee as to which
operation (send or receive) happens first. I think it does not, does it?
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
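For context, what I'm doing is a chain-style exchange roughly like the sketch
below (written from my recollection of the Java bindings' sendRecv argument
order, so treat the signature as an assumption); I wanted to be sure I don't
have to reason about which half completes first.

// Sketch: each rank sends to the next and receives from the previous in one call.
import mpi.MPI;

public class RingShift {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.getRank();
        int size = MPI.COMM_WORLD.getSize();
        int next = (rank + 1) % size;
        int prev = (rank - 1 + size) % size;

        double[] toSend = {rank};        // payload being passed around the ring
        double[] toRecv = new double[1];

        // sendRecv pairs the send and the receive so neither side deadlocks,
        // regardless of which half the implementation progresses first.
        MPI.COMM_WORLD.sendRecv(toSend, 1, MPI.DOUBLE, next, 0,
                                toRecv, 1, MPI.DOUBLE, prev, 0);

        System.out.println("rank " + rank + " received " + toRecv[0]);
        MPI.Finalize();
    }
}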
Hi,
Is it possible to use non-primitive types with MPI operations in OpenMPI's
Java binding? At the moment in the trunk I only see Datatypes for primitive
kinds.
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
you must create them with the Datatype
> methods (createVector, createStruct,...). And the buffers that hold the
> data must be arrays of a primitive type or direct buffers.
>
> Regards,
> Oscar
>
>
> Quoting Saliya Ekanayake :
>
> Hi,
>>
>> Is it possi
struct data using direct buffers and avoiding
> serialization. MPI.OBJECT could be implemented in a higher level layer, but
> serialization is very bad for performance...
>
> Regards,
> Oscar
>
> Quoting Saliya Ekanayake :
>
> Thank you Oscar. I was using an earlier n
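In case it's useful to someone searching the archives, the interim workaround I
have in mind is plain Java serialization into a byte[] that is then sent as
MPI.BYTE, i.e. paying exactly the serialization cost Oscar warns about, but
keeping arbitrary objects. A rough sketch of the helper side (pure Java, no MPI
calls here):

// Sketch: turn any Serializable object into a byte[] suitable for MPI.BYTE transfers.
import java.io.*;

public class ObjectBytes {
    // Serialize a Serializable object into a byte array.
    static byte[] toBytes(Object obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Rebuild the object on the receiving side.
    static Object fromBytes(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return ois.readObject();
        }
    }
}

The byte[] from toBytes() can then go through send/recv with MPI.BYTE, with its
length sent ahead so the receiver can size the buffer.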
Thank you, this is exactly what I was hoping for!!
Saliya
On Jan 18, 2014 2:40 AM, "Siegmar Gross" <
siegmar.gr...@informatik.hs-fulda.de> wrote:
> Hi,
>
> > Anyway I wonder if there are some samples illustrating the use
> > of complex structures in OpenMPI
>
> I'm not sure if my small programs a
e correct me here
if I am wrong.
Thank you,
Saliya
On Wed, Nov 27, 2013 at 6:00 AM, George Bosilca wrote:
> Why do you need an order? If you plan to send and receive on the same
> buffer, you should use the MPI constructs for that, namely MPI_IN_PLACE.
>
> George.
>
> On Nov 2
Thank you Jeff for the clarification.
Saliya
On Fri, Feb 14, 2014 at 7:06 AM, Jeff Squyres (jsquyres) wrote:
> On Feb 13, 2014, at 10:59 PM, Saliya Ekanayake wrote:
>
> > Anyway, to answer your question I was trying to do sendrecv in a chain
> where "toSend" and &q
e.org/jira/browse/MAPREDUCE-2911
Also, is there a place I can get more info on this effort?
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
>
> On Feb 23, 2014, at 10:42 AM, Saliya Ekanayake wrote:
>
> Hi,
>
> This is to get some info on the subject and not directly a question on
> OpenMPI.
>
> I've seen Jeff's blog post on integrating OpenMPI with Hadoop (
> http://blogs.cisco.com/performance/resurre
Thank you Ralph. I'll get back to you if I run into issues.
On Mon, Feb 24, 2014 at 12:23 PM, Ralph Castain wrote:
>
> On Feb 24, 2014, at 7:55 AM, Saliya Ekanayake wrote:
>
> This is very interesting. I've been working on getting one of our
> clu
.
Node: 192.168.0.19
This is a warning only; your job will continue, though performance may
be degraded.
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
re installed, or you can turn off the warning
> using "-mca hwloc_base_mem_bind_failure_action silent"
>
>
> On Feb 25, 2014, at 10:32 PM, Saliya Ekanayake wrote:
>
> Hi,
>
> I tried to run an MPI Java program with --bind-to core. I receive the
> following war
What's the disadvantage of not using --bind-to core?
Thank you,
Saliya
On Wed, Feb 26, 2014 at 11:01 AM, Saliya Ekanayake wrote:
> Thank you Ralph, I'll check this.
>
>
> On Wed, Feb 26, 2014 at 10:04 AM, Ralph Castain wrote:
>
>> It means that OMPI didn't get built a
, 2014, at 12:17 PM, Saliya Ekanayake wrote:
>
> I have a followup question on this. In our application we have parallel
> for loops similar to OMP parallel for. I noticed that in order to gain
> speedup with threads I have to set --bind-to none; otherwise multiple threads
> wil
on in OMPI
Thank you,
Saliya
On Wed, Feb 26, 2014 at 4:50 PM, Ralph Castain wrote:
> Sorry, had to run some errands.
>
> On Feb 26, 2014, at 1:03 PM, Saliya Ekanayake wrote:
>
> Is it possible to bind to cores of multiple sockets? Say I have a machine
> with 2 sockets each with
PM, Ralph Castain wrote:
>
> On Feb 26, 2014, at 4:29 PM, Saliya Ekanayake wrote:
>
> I see, so if I understand correctly, the best scenario for threads would
> be to bind 2 procs to sockets as --map-by socket:pe=4 and use 4 threads in
> each proc.
>
>
> Yes, that would be t
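To make that concrete for anyone reading later, I believe the launch line for
that scenario on a 2-socket, 8-core node would look roughly like the following
(flags taken from the discussion above; --report-bindings is only there to
confirm the layout):

mpirun -np 2 --map-by socket:pe=4 --report-bindings java MyMPIApp

with each of the 2 processes then running 4 threads on its own 4 cores.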
ly, outside of that and the mpirun man page, there isn't much available
> yet. I'm woefully far behind on it.
>
>
> On Feb 26, 2014, at 4:47 PM, Saliya Ekanayake wrote:
>
> Thank you Ralph, this is very insightful and I think I can better
> understand performance
l continue, though performance may
be degraded.
Thank you,
Saliya
--
Saliya Ekanayake esal...@gmail.com
Cell 812-391-4914 Home 812-961-6383
http://saliya.org
built.
>
> If that still didn't solve your issue, please send all the information
> listed here:
>
> http://www.open-mpi.org/community/help/
>
>
>
> On Mar 4, 2014, at 6:57 AM, Saliya Ekanayake wrote:
>
> > Hi,
> >
> > In an earlier thread I m
168.0.19
> where the problem happened in your most recent job.
> Sometimes a node is down during a massive package install,
> is forgotten, and never gets updated.
>
> I hope this helps,
> Gus Correa
>
>
> On 03/04/2014 12:15 PM, Saliya Ekanayake wrote:
>
>> I actually
linux-amd64 compressed oops)
# Problematic frame:
# C [libc.so.6+0x7b75b] memcpy+0x15b
--
Saliya Ekanayake esal...@gmail.com
http://saliya.org
a particular
> MPI function that you're using that results in this segv (e.g., perhaps we
> have a specific bug somewhere)?
>
> Can you reduce the segv to a small example that we can reproduce (and
> therefore fix)?
>
>
> On Mar 10, 2014, at 12:05 AM, Saliya Ekanayake
See if that enables
> you to run (this particular component has nothing to do with Java).
>
>
> On Mar 11, 2014, at 1:33 AM, Saliya Ekanayake wrote:
>
> > Just tested that this happens even with the simple Hello.java program
> given in OMPI distribution.
> >
> > I