Muhammad,
Our configuration of TCP is tailored for 1 Gb/s networks, so its performance on
10G might be sub-optimal. That being said, the remainder of this email will be
speculation, as I do not have access to a 10G system to test it on.
There are two things that I would test to see if I can improve
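(The list of things to try is cut off above; purely as an illustration of the
kind of knobs involved, and with untested guess values, one could try bumping
the TCP BTL socket buffers, e.g.:

  mpirun --mca btl tcp,self \
         --mca btl_tcp_sndbuf 4194304 \
         --mca btl_tcp_rcvbuf 4194304 \
         -np 2 ./your_benchmark

where ./your_benchmark is a placeholder; "ompi_info --param btl tcp --level 9"
lists the full set of TCP parameters.)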
The Java bindings are written on top of the C bindings, so you'll be able
to use those networks just fine from Java :-)
On Wed, Apr 16, 2014 at 2:27 PM, Saliya Ekanayake wrote:
> Thank you Nathan, this is what I was looking for. I'll try to build
> OpenMPI 1.8 and get back to this thread if I
Thank you Nathan, this is what I was looking for. I'll try to build OpenMPI
1.8 and get back to this thread if I run into issues.
Saliya
On Wed, Apr 16, 2014 at 5:19 PM, Nathan Hjelm wrote:
> You do not need CCM to use Open MPI with Gemini and Aries. Open MPI
> has natively supported both n
You do not need CCM to use Open MPI with Gemini and Aries. Open MPI
has natively supported both networks since 1.7.0. Please take a look at
the platform files in contrib/platform/lanl/cray_xe6 for CLE 4.1
support. You should be able to just build using:
configure --with-platform=contrib/platfor
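(As a sketch of what the full command might look like: the platform files live
in that directory, so <platform-file> below stands for whichever one matches
your system, and --enable-mpi-java plus the install prefix are additions for
the Java-bindings case, assuming a JDK is present on the build host:

  ./configure --with-platform=contrib/platform/lanl/cray_xe6/<platform-file> \
              --enable-mpi-java \
              --prefix=$HOME/opt/openmpi-1.8
  make -j 8 && make install
)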
I see. Also, I wanted to build OpenMPI because the provided OpenMPI didn't
have the Java bindings. It seems at this point the only option is to use TCP in
CCM on BigRed 2, and if I remember correctly Mason and Quarry don't have IB
either, correct?
Thank you,
Saliya
On Wed, Apr 16, 2014 at 5:01 PM,
Hello,
Big Red 2 provides its own MPICH-based MPI. The only case where the
provided OpenMPI module becomes relevant is when you create a CCMLogin
instance in Cluster Compatibility Mode (CCM). For most practical uses,
those sorts of needs are better addressed on the Quarry or Mason machines.
Hi,
We have a Cray XE6/XK7 supercomputer (BigRed II), and I was trying to get the
OpenMPI Java bindings working on it. I couldn't find a way to utilize its
Gemini interconnect, so instead I was running on TCP, which is inefficient.
I see some work has been done along these lines in [1] and wonder if you
cou
Hi,
I committed your patch to the trunk.
thanks
M
On Wed, Apr 16, 2014 at 6:49 PM, Mike Dubman wrote:
> +1
> looks good.
>
>
> On Wed, Apr 16, 2014 at 4:35 PM, Åke Sandgren
> wrote:
>
>> On 04/16/2014 02:25 PM, Åke Sandgren wrote:
>>
>>> Hi!
>>>
>>> Found this problem when building r31409 with
Gus
It is a single machine and I have installed Ubuntu 12.04 LTS. I left my
computer at the college, but I will try to follow your advice when I can and
tell you about it.
Thanks
Sent from my iPad
> On 16/04/2014, at 14:17, "Gus Correa" wrote:
>
> Hi Oscar
>
> This is a long shot
Dan,
On the hosts where the ADIOI lock error occurs, are there any NFS errors in
/var/log/messages, dmesg, or similar that refer to lockd?
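(For example, something along these lines, assuming syslog writes to
/var/log/messages on those hosts:

  dmesg | grep -i lockd
  grep -iE 'lockd|nfs' /var/log/messages
)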
--john
-Original Message-
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Daniel Milroy
Sent: Tuesday, April 15, 2014 10:55 AM
To: Op
Hi Oscar
This is a long shot, but maybe worth trying.
I am assuming you're using Linux, or some form of Unix, right?
You may try to increase the stack size.
The default in Linux is often too small for large programs.
Sometimes this may cause a segmentation fault, even if the
program is correct.
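(For example, in bash, before launching the run; ./your_program is a
placeholder, and for ranks started on other nodes the limit usually has to be
set in the shell startup files there as well:

  ulimit -s unlimited      # or a large value in kB, e.g. ulimit -s 1024000
  mpirun -np 4 ./your_program

The csh/tcsh equivalent is "limit stacksize unlimited".)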
Hello,
I am Flavienne and I am a master's student.
I wrote a script which has to back up sequential applications with BLCR and
parallel applications with Open MPI.
I created symbolic links to this script in the /etc/rc0.d and /etc/rc6.d folders
in order for it to be executed before the shutdown and reboot processes
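(As a sketch of that setup under SysV init, where K-prefixed links in rc0.d and
rc6.d are run at halt and reboot; the script name blcr-backup and the sequence
number are hypothetical:

  ln -s /etc/init.d/blcr-backup /etc/rc0.d/K10blcr-backup
  ln -s /etc/init.d/blcr-backup /etc/rc6.d/K10blcr-backup
)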
+1
looks good.
On Wed, Apr 16, 2014 at 4:35 PM, Åke Sandgren wrote:
> On 04/16/2014 02:25 PM, Åke Sandgren wrote:
>
>> Hi!
>>
>> Found this problem when building r31409 with Pathscale 5.0
>>
>> pshmem_barrier.c:81:6: error: redeclaration of 'pshmem_barrier_all' must
>> have the 'overloadable' at
On 04/16/2014 08:30 AM, Oscar Mojica wrote:
What would the command line be to compile with the -g option? What debugger
can I use?
Thanks
Replace any optimization flags (-O2, or similar) with -g.
Check if your compiler has the -traceback flag or similar
(man compiler-name).
The gdb debugger is
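(A minimal sketch of that workflow, assuming an MPI C program; the names are
placeholders and the core file may be called core.<pid> depending on the
system:

  mpicc -g -O0 -o myprog myprog.c
  ulimit -c unlimited
  mpirun -np 4 ./myprog
  gdb ./myprog core        # then 'bt' prints the backtrace
)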
On 04/16/2014 02:25 PM, Åke Sandgren wrote:
Hi!
Found this problem when building r31409 with Pathscale 5.0
pshmem_barrier.c:81:6: error: redeclaration of 'pshmem_barrier_all' must
have the 'overloadable' attribute
void shmem_barrier_all(void)
^
../../../../oshmem/shmem/c/profile/defines.h
Thanks, Victor! Sorry for the problem, but I appreciate you bringing it to our
attention.
Ralph
On Wed, Apr 16, 2014 at 5:16 AM, Victor Vysotskiy <
victor.vysots...@teokem.lu.se> wrote:
> Hi,
>
> I just will confirm that the issue has been fixed. Specifically, with the
> latest OpenMPI v1.8.1a1r3
What would the command line be to compile with the -g option? What debugger
can I use?
Thanks
Sent from my iPad
> On 15/04/2014, at 18:20, "Gus Correa" wrote:
>
> Or just compiling with -g or -traceback (depending on the compiler) will
> give you more information about the point of f
Hi Ralph,
Yes, you are right. I should have also tested the NetPipe-MPI version earlier.
I ran the NetPipe-MPI version on 10G Ethernet, and the maximum bandwidth
achieved is 5872 Mbps. Moreover, the maximum bandwidth achieved by the osu_bw
test is 6080 Mbps. I used OSU Micro-Benchmarks version 4.3.
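(For context, a typical way to pin such a run to a specific 10G interface with
Open MPI; the hostnames and the interface name eth2 are placeholders:

  mpirun -np 2 -host node1,node2 \
         --mca btl tcp,self --mca btl_tcp_if_include eth2 \
         ./osu_bw
)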
On Wed, Apr 1
Hi!
Found this problem when building r31409 with Pathscale 5.0
pshmem_barrier.c:81:6: error: redeclaration of 'pshmem_barrier_all' must
have the 'overloadable' attribute
void shmem_barrier_all(void)
^
../../../../oshmem/shmem/c/profile/defines.h:193:37: note: expanded from
macro 'shmem_b
Hi,
I can confirm that the issue has been fixed. Specifically, with the latest
OpenMPI v1.8.1a1r31402 we now need 2.5 hrs to complete the verification, and
that timing is even slightly better than with v1.6.5 (3 hrs).
Thank you very much for your assistance!
With best regards,
Victor.
I apologize, but I am now confused. Let me see if I can translate:
* you ran the non-MPI version of the NetPipe benchmark and got 9.5 Gbps on a
10 Gbps network
* you ran iperf and got 9.61 Gbps - however, this has nothing to do with MPI;
it just tests your TCP stack
* you tested your bandwidth program on
Yes, I have tried NetPipe-Java and iperf for bandwidth and configuration
testing. NetPipe-Java achieves a maximum of 9.40 Gbps, while iperf achieves a
maximum of 9.61 Gbps. I have also tested my bandwidth program on a 1 Gbps
Ethernet connection, and it achieves 901 Mbps. I am using the same
progr