Hello,
I'm a LAM user looking to switch to Open MPI, and I'm having trouble
compiling. I hope someone here can give me a hint!
So configure works fine, as does make up to this point:
g++ -O3 -DNDEBUG -fno-inline -pthread -o .libs/ompi_info components.o
ompi_info.o output.o param.o version.o -Wl,-
It would be nice if the C++ compiler wrapper were
installed under mpicxx, mpiCC, and mpic++ instead of
just the latter two.
Also on the fantasy wish list is for the libraries to
be installed in libtool form (unless you go away from autotools
altogether).
Ben Allan
snl/ca
Just a brief response on two points (lest the 'insiders' think
there are no sympathetic outsiders...).
On Wed, Jun 15, 2005 at 01:09:27PM -0400, Jeff Squyres wrote:
>
> Although we have not made a final decision yet, given that community
> involvement is a *strong* goal of this project, we've ac
bin/ompi_info presents an opportunity to help all us shlubs that
have to do gnu build systems.
It appears it could be extended to include useful bits of info
that are normally classed as build magic.
e.g. gnome-config, xml2-config, etc, etc.
I see lam-config was debated at least briefly back in 2
;- works
mpirun -np 2 ./myapp <- works
mpirun -np 2 --host myhost ./myapp <- does not work
I already configured ssh so that I don't have to enter a password.
I am using Open MPI version 1.8.1 on both machines.
I uploaded all the required files; I hope you can help me...
Regar
64011,0],0]
[CUDAServer:04970] [[64011,0],1] CLOSING SOCKET 9
Regards
Benjamin Giehle
Thanks for your help!
Regards
Benjamin Giehle
How are we meant to free memory allocated with MPI_Win_allocate()? The
following crashes for me with OpenMPI 1.10.6:
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
MPI_Init(&argc, &argv);
int n = 1000;
int *a;
MPI_Win win;
MPI_Win_allocate(n*sizeof(int), sizeof(int),
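For reference, the complete minimal program I have in mind looks like this (my
own sketch of what I assume is the intended pairing: the buffer belongs to the
window, so MPI_Win_free() releases it and free() is never called on it):

#include <mpi.h>

/* Sketch of the intended allocate/free pairing, not the original failing code. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int n = 1000;
    int *a;
    MPI_Win win;

    /* Allocate the buffer and create the window in one call. */
    MPI_Win_allocate(n * sizeof(int), sizeof(int),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &a, &win);

    /* ... RMA epochs using `win` and `a` would go here ... */

    /* Freeing the window is what releases the buffer;
     * free(a) or MPI_Free_mem(a) must not be called on it. */
    MPI_Win_free(&win);

    MPI_Finalize();
    return 0;
}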
MPI_Accumulate() is meant to be non-blocking, and MPI will block until
completion when an MPI_Win_flush() is called, correct?
In this (https://hastebin.com/raw/iwakacadey) microbenchmark,
MPI_Accumulate() seems to be blocking for me in OpenMPI 1.10.6.
I'm seeing timings like
[brock@nid00622 junk
Is there any way to issue simultaneous MPI_Accumulate() requests to
different targets, then? I need to update a distributed array, and this
serializes all of the communication.
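To make this concrete, the pattern I am after looks like the sketch below (the
helper name accumulate_to_all and its arguments are made up for illustration):
one MPI_Accumulate() per target is started inside a single passive-target
epoch, and all of them are completed by one MPI_Win_flush_all() instead of a
flush after every call.

#include <mpi.h>

/* Illustrative helper, not code from the benchmark: adds `count` ints
 * from `contrib` into offset 0 of the window on every rank. */
void accumulate_to_all(MPI_Win win, int *contrib, int count)
{
    int nranks;
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    for (int target = 0; target < nranks; ++target) {
        /* Operations to different targets are only started here ... */
        MPI_Accumulate(contrib, count, MPI_INT,
                       target, 0 /* target_disp */, count, MPI_INT,
                       MPI_SUM, win);
    }
    /* ... and completed together here. */
    MPI_Win_flush_all(win);
    MPI_Win_unlock_all(win);
}

Whether the transfers actually overlap is presumably up to the implementation.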
Ben
On Thu, May 4, 2017 at 5:53 AM, Marc-André Hermanns <
m.a.herma...@fz-juelich.de> wrote:
> Dear Benjami
x-gnu/3.4.6/crtend.o
/usr/lib/gcc/x86_64-linux-gnu/3.4.6/../../../../lib/crtn.o
/usr/lib/gcc/x86_64-linux-gnu/3.4.6/../../../../lib/libfrtbegin.a(frtbegin.o):
in function `main':
(.text+0x1e): undefined reference to `MAIN__'
collect2: ld returned 1 exit status
Thanks,
Benjamin
s were
bigger than before.
I'll get back to you with more info when I'm able to fix my connection
problem to the cluster...
Thanks,
Benjamin
2010/12/3 Martin Siegert
> Hi All,
>
> just to expand on this guess ...
>
> On Thu, Dec 02, 2010 at 05:40:53PM -0500, Gus Correa
necessary.
Furthermore, MPI_SEND and MPI_RECEIVE are called a dozen times in only one
source file (used for passing a data structure from one node to another), and
this has proved to work in every situation.
Not knowing which line is causing my segfault is annoying.
Regards,
Benjamin
2010/12/6
uble-8".
I'd like to stress that in both cases MPI_INTEGER is 4 bytes long.
I'll follow my own intuition and Jeff's advice, which is to use the same flags
for compiling Open MPI as for compiling DRAGON.
Thanks,
Benjamin
I always recommend using the same flags for compiling OMPI as
(Santiago)
Is it a bad use of the framework, or could it be a bug?
Thank you in advance.
Benjamin
,
--
Benjamin Bouvier
From: users-boun...@open-mpi.org [users-boun...@open-mpi.org] on behalf of Jeff
Squyres [jsquy...@cisco.com]
Sent: Friday, June 8, 2012 16:30
To: Open MPI Users
Subject: Re: [OMPI users] Bug when mixing sent types in version 1.6
On
get their opinion about it.
Now that I know the program doesn't work with either the OMPI or the MPICH2
implementation, I guess it's not dependent on the MPI implementation.
If you have any ideas or comments, I would be pleased to hear them.
--
Benjamin Bouvier
I do `netstat -a | grep node2` from node1. However, the program
keeps blocking.
What else could provoke that failure?
--
Benjamin BOUVIER
To start, I would ensure that all firewalling (e.g., iptables) is disabled on
all machines involved.
On Jun 11,
node1,node2,node3 ring_c": blocks at the same point as mentioned above in the
case of 3 hosts.
I recompiled this test program with MPICH2 and have exactly the same issues at
the same time.
There is really something wrong with that network...
--
Benjamin Bouvier
Thank you one more time.
--
Benjamin Bouvier
> What's the output from ifconfig on all nodes?
>
>--
>Jeff Squyres
>jsquy...@cisco.com
>For corporate legal information go to:
>http://www.cisco.com/web/about/doing_business/legal/cri/
I'm using ClusterTools 8.2.1 on Solaris 10 and according to the HPC
docs,
"Open MPI includes a commented default hostfile at
/opt/SUNWhpc/HPC8.2/etc/openmpi-default-hostfile. Unless you
specify
a different hostfile at a different location, this is the hostfile
that OpenMPI uses."
I have added my
In trying to track down my default hostfile problem, I found that
when I run ompi_info, it simply keeps repeating:
Displaying Open MPI information for 32-bit ...
Displaying Open MPI information for 32-bit ...
Displaying Open MPI information for 32-bit ...
Displaying Open MPI information for 32-bit
What's the proper way to use shmem_int_fadd() in OpenMPI's SHMEM?
A minimal example seems to seg fault:
#include <shmem.h>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char **argv) {
shmem_init();
const size_t shared_segment_size = 1024;
void *shared_segment = shmem_malloc(shared_segment_size);
int *
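For comparison, the usage I am assuming is correct looks like the sketch below
(names such as counter are just for illustration): the target of
shmem_int_fadd() has to be a symmetric address, so it comes from a collective
shmem_malloc() call made by every PE.

#include <shmem.h>
#include <stdio.h>

/* Sketch of what I believe is the intended usage, not the failing program. */
int main(void)
{
    shmem_init();

    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    /* Symmetric allocation: every PE allocates the same size. */
    int *counter = (int *) shmem_malloc(sizeof(int));
    *counter = 0;
    shmem_barrier_all();

    /* Every PE atomically adds 1 to the counter on PE 0 and
     * receives the value it held before the add. */
    int old = shmem_int_fadd(counter, 1, 0);
    shmem_barrier_all();

    if (me == 0)
        printf("PE %d of %d: counter = %d, my fetched value was %d\n",
               me, npes, *counter, old);

    shmem_free(counter);
    shmem_finalize();
    return 0;
}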
> What version of Open MPI are you trying to use?
Open MPI 2.1.1-2 as distributed by Arch Linux.
> Also, could you describe something about your system.
This is all in shared memory on a MacBook Pro; no networking involved.
The seg fault with the code example above looks like this:
[xiii@shini
I'd like to run Open MPI on a cluster of RISC-V machines. These machines
have pretty weak cores, so I need to cross-compile. I'd like to do this:
Machine 1, which is x86_64-linux-gnu, compiles programs for machine 2.
Machine 2, which is riscv64-unknown-linux, will run these programs.
It seem
> try removing the --target option.
With the configure line
./configure --host=riscv64-unknown-linux --enable-static --disable-shared
--prefix=/home/ubuntu/src/ben-build/openmpi
It successfully configures, but I now get the error
/home/xiii/Downloads/openmpi-3.0.0/opal/.libs/libopen-pal.a(patch
I have the same error with
./configure --host=riscv64-unknown-linux --build=x86_64-linux-gnu
--enable-static
--disable-shared --prefix=/home/ubuntu/src/ben-build/openmpi
Ben
On Sat, Dec 16, 2017 at 4:50 PM, Benjamin Brock
wrote:
> > try removing the --target option.
>
> With t
Yeah, I just noticed that Open MPI was giving me all x86_64 binaries with
the configuration flags
./configure --host=riscv64-unknown-linux --enable-static --disable-shared
--disable-dlopen --enable-mca-no-build=patcher-overwrite
--prefix=/home/ubuntu/src/ben-build/openmpi
and was very confused.
Recently, when I try to run something locally with OpenMPI with more than
two ranks (I have a dual-core machine), I get the friendly message
--
There are not enough slots available in the system to satisfy the 3 slots
that wer
How can I run an OpenSHMEM program just using shared memory? I'd like to
use OpenMPI to run SHMEM programs locally on my laptop.
I understand that the old SHMEM component (Yoda?) was taken out, and that
UCX is now required. I have a build of OpenMPI with UCX as per the
directions on this random
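For concreteness, the invocation I have been experimenting with looks roughly
like this (my own guesses at the knobs for a pure shared-memory run;
./my_shmem_program is just a placeholder):

# restrict UCX to shared-memory and self transports
export UCX_TLS=self,sm

# run two PEs locally through Open MPI's oshrun, forcing the UCX SPML
oshrun -np 2 --mca spml ucx ./my_shmem_program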
Here's what I get with those environment variables:
https://hastebin.com/ibimipuden.sql
I'm running Arch Linux (but with OpenMPI/UCX installed from source as
described in my earlier message).
Ben
Are MPI datatypes like MPI_INT and MPI_CHAR guaranteed to be compile-time
constants? Is this defined by the MPI standard, or in the Open MPI
implementation?
I've written some template code where MPI datatypes are constexpr members,
which requires that they be known at compile time. This works in
Thanks for the responses--from what you've said, it seems like MPI types
are indeed not guaranteed to be compile-time constants.
However, I worked with the people at IBM, and it seems like the difference
in behavior was caused by the IBM compiler, not the IBM Spectrum MPI
implementation.
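For anyone hitting the same thing: since the handles are apparently not
guaranteed to be compile-time constants, the portable fallback I can think of
is to expose the datatype through a function instead of a constexpr member,
along these lines (a sketch; the trait name mpi_type is made up, not code from
my library):

#include <mpi.h>

// Primary template intentionally left undefined so that unsupported
// types fail to compile.
template <typename T> struct mpi_type;

template <> struct mpi_type<int> {
    static MPI_Datatype value() { return MPI_INT; }
};
template <> struct mpi_type<char> {
    static MPI_Datatype value() { return MPI_CHAR; }
};
template <> struct mpi_type<double> {
    static MPI_Datatype value() { return MPI_DOUBLE; }
};

// Usage: MPI_Send(buf, n, mpi_type<int>::value(), dest, tag, comm);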
Ben
I'm setting up a cluster on AWS, which will have a 10Gb/s or 25Gb/s
Ethernet network. Should I expect to be able to get RoCE to work in Open
MPI on AWS?
More generally, what optimizations and performance tuning can I do to an
Open MPI installation to get good performance on an Ethernet network?
Thanks for your response.
One question: why would RoCE still require host processing of every
packet? I thought the point was that some nice server Ethernet NICs can
handle RDMA requests directly? Or am I misunderstanding RoCE, or how Open
MPI's RoCE transport works?
Ben
In case anyone comes across this thread in an attempt to get RDMA over
Ethernet working on AWS, here's the conclusion I came to:
There are two kinds of NICs exposed to VMs on AWS:
- Intel 82599 VF
- This NIC is old and does not support RoCE or iWARP.
- It's a virtualized view of an actu
I used to be able to (e.g. in Open MPI 3.1) put the line
rmaps_base_oversubscribe = true
in my `openmpi-mca-params.conf`, and this would enable oversubscription by
default. In 4.0.0, it appears that this option doesn't work anymore, and I
have to use `--oversubscribe`.
Am I missing something, o
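For context, these are the alternatives I am aware of (my own notes, not
verified against 4.0.0; ./a.out stands in for any test program):

# in $HOME/.openmpi/mca-params.conf or <prefix>/etc/openmpi-mca-params.conf
rmaps_base_oversubscribe = true

# the same MCA parameter set through the environment
export OMPI_MCA_rmaps_base_oversubscribe=true

# per invocation, on the command line
mpirun --oversubscribe -np 3 ./a.out
mpirun --map-by :OVERSUBSCRIBE -np 3 ./a.out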
Hello,
I am new to using Open MPI and would like to know something basic.
What is the equivalent of "mpif.h" in Open MPI, which is normally
"included" at the beginning of MPI codes (Fortran in this case)?
I would appreciate that for cpp as well.
Thanks
Ben
Palen
> www.umich.edu/~brockp <http://www.umich.edu/%7Ebrockp>
> Center for Advanced Computing
> bro...@umich.edu
> (734)936-1985
>
>
>
>
> On Oct 30, 2008, at 10:33 AM, Benjamin Lamptey wrote:
>
> Hello,
>> I am new at using open-mpi and will like
lamptey/projectb/src/blag_real_burnmpi.f90
Error: Can't open included file 'mpif.h'
make: *** [blag_real_burnmpi.o] Error 1
xxx
5) What are people's experiences in this case?
Thanks
Ben
On Thu, Oct 30, 2008 at 2:33 PM, Benjamin Lamptey wrote:
> Hello,
> I am new at using
I've been using the oldish (2003) mpijava of late.
It holds up pretty well with modern mpis, but certain
jvms persist in causing extra copies, using SEGV as
a means of process control, etc.
If you don't need "true" sun java compatibility, you
can also use gcj (gcc suite) or titanium (berkeley)
in
fine
* My ompi-output.tar.gz file can be found here:
http://www.stolaf.edu/people/landstei/ompi-output.tar.gz
Thanks,
Ben Landsteiner
+----+
Benjamin Landsteiner
St. Olaf College
lands...@stolaf.edu
hine. Not sure if it's
necessary or not.
8. Go back to the v1.1 directory. Type 'make clean', then
reconfigure, then recompile and reinstall
9. Things should work now.
Thank you Michael,
~Ben
++
Benjamin Landsteiner
lands...@stolaf.edu
On 2006/06/26, a
Manav,
You may also wish to consult the man or info pages for your
particular flavor of gcc regarding the interpretation of
-ansi. There may be more specific alternatives that check
whatever flavor of ISO compliance is important to you.
Unfortunately, the mpi specification was written before
int3
mething obvious.
Here is some system information:
SUSE 10.1
g77 fortran compiler (does the same thing with gfortran)
openmpi 1.1.1
Attached is the output from both attempts, with the -Nx400 and
without.
Benjamin Gaudio
blacstester-output.tar.gz
Description: Binary data
I have a code that runs with both Portland and Intel compilers on
X86, AMD64 and Intel EM64T running various flavors of Linux on clusters.
I am trying to port it to a 2-CPU Itanium2 (ia64) running Red Hat
Enterprise Linux 4.0; it has gcc 3.4.6-8 and the Intel Fortran compiler
10.0.026 installe
on the Itanium2.
Ted (more responses below)
On November 7, 2007 at 8:39 AM, Squyres, Jeff wrote:
On Nov 5, 2007, at 4:12 PM, Benjamin, Ted G. wrote:
>> I have a code that runs with both Portland and Intel
compilers
>> on X86, AMD64 and Intel EM64T running
I get the following error when trying to run SHMEM programs using UCX.
[xiii@shini dir]$ oshrun -n 1 ./target/debug/main
[1556046469.890238] [shini:19769:0]sys.c:619 UCX ERROR
shmget(size=2097152 flags=0xfb0) for mm_recv_desc failed: Operation not
permitted, please check shared memor
And, to provide more details, I'm using a fresh vanilla build of Open MPI
4.0.1 with UCX 1.5.1 (`./configure --with-ucx=$DIR/ucx-1.5.1`).
Ben