On Jan 30, 2008, at 2:36 AM, Martin Horvat wrote:
(1) First I would like to clarify the problem connected to Open MPI.
It was compiled with the Intel suite:
ifort --version
ifort (IFORT) 10.0 20070613
Copyright (C) 1985-2007 Intel Corporation. All rights reserved.
I build and test with the inte
On Jan 30, 2008, at 5:35 PM, Adam C Powell IV wrote:
With no reply in a couple of weeks, I'm wondering if my previous
message
got dropped. (Then again, my previous message was a couple of weeks
late in replying to its predecessor...)
No, it didn't get dropped -- it was exactly your admissio
Hello,
With no reply in a couple of weeks, I'm wondering if my previous message
got dropped. (Then again, my previous message was a couple of weeks
late in replying to its predecessor...)
I'm recommending a change to mpi.h which would let C headers included by
C++ programs do:
#define OMPI_SKIP_
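[For context, the pattern being proposed would look roughly like this in a
C header. This is a sketch only: the macro name is cut off above, and
OMPI_SKIP_MPICXX is an assumption about what it completes to, since that is
the macro Open MPI ships for suppressing the C++ bindings; the header and
function names are made up for illustration.]

    /* my_c_api.h -- a hypothetical C header that C++ programs may include */
    #ifndef MY_C_API_H
    #define MY_C_API_H

    #define OMPI_SKIP_MPICXX 1   /* ask mpi.h not to pull in the C++ bindings */
    #include <mpi.h>

    #ifdef __cplusplus
    extern "C" {
    #endif

    void my_c_function(MPI_Comm comm);   /* hypothetical C API */

    #ifdef __cplusplus
    }
    #endif

    #endif /* MY_C_API_H */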
Yes, I had already realized, sorry about the question.
It seems, however, that for some MPI implementations (LAM in
particular) a NULL pointer is treated like a pointer to
MPI_REQUEST_NULL, and my program ran fine with them. I have corrected
my program and all works fine with Open MPI. Thanks.
I think you are mixing up two different things here: a NULL pointer is
invalid, and thus Open MPI has to raise an error. If a request is
MPI_REQUEST_NULL, that's perfectly legal according to the standard.
However, MPI_REQUEST_NULL is not a NULL pointer; it's a well-defined value.
Francisco Jesús
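[To make the distinction concrete, a minimal sketch -- not from the
original poster's program:]

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Request req = MPI_REQUEST_NULL;   /* a well-defined handle value */
        MPI_Status  st;

        MPI_Init(&argc, &argv);

        /* Legal: waiting on MPI_REQUEST_NULL completes immediately. */
        MPI_Wait(&req, &st);

        /* Invalid: NULL is not a pointer to a request handle, so Open MPI
         * must raise an error here (left commented out for that reason). */
        /* MPI_Wait(NULL, &st); */

        MPI_Finalize();
        return 0;
    }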
Hello Users,
The man page for MPI_Waitany states that
"The array_of_requests list *may contain null* or inactive handles. If
the list contains no active handles (list has length zero or all
entries are null or inactive), then the call returns immediately with
index = MPI_UNDEFINED, and an empty
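[A minimal program that exercises exactly the case the man page describes --
a list of two null handles, just to show the MPI_UNDEFINED result:]

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Request reqs[2] = { MPI_REQUEST_NULL, MPI_REQUEST_NULL };
        MPI_Status  st;
        int         index;

        MPI_Init(&argc, &argv);

        /* The list contains no active handles, so the call returns
         * immediately with index == MPI_UNDEFINED. */
        MPI_Waitany(2, reqs, &index, &st);
        if (index == MPI_UNDEFINED)
            printf("no active requests in the list\n");

        MPI_Finalize();
        return 0;
    }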
On Wed, Jan 30, 2008 at 09:13:28AM -0500, Sang Chul Choi wrote:
> I am wondering which version of Open MPI I should install. I am using the
> latest version of Ubuntu. Is the Debian package 1.1-2.5 a relatively recent
> version of Open MPI?
No -- and a simple way of getting selected packages from De
Jeff, thank you for your suggestion. I am sure that the correct mpif.h is
being included. One thing that I did not do in my original message was
submit the job to SGE. I did that, and the program still failed with the
same seg fault messages.
Below is the output of the job submitted to SGE.
On Jan 30, 2008, at 10:05 AM, Thomas Ropars wrote:
Sorry, I made a mistake ... it works fine for me too with LT 1.5.22.
It seems that the problem is with LT 1.5.24
With this version, I only have in my sys_dl_open():
lt_module module = dlopen (filename, LT_LAZY_OR_NOW);
Yoinks! LT changed t
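[For anyone following along, the fix under discussion amounts to making the
loaded module's symbols globally visible. In plain dlopen() terms it looks
like the sketch below; RTLD_LAZY stands in for libtool's LT_LAZY_OR_NOW,
and the function name is made up for illustration:]

    #include <dlfcn.h>

    /* Sketch of what a patched sys_dl_open() needs to do. */
    void *open_module(const char *filename)
    {
        /* RTLD_GLOBAL exports the module's symbols to modules loaded
         * later; without it, plugins that rely on each other's symbols
         * fail to resolve. */
        return dlopen(filename, RTLD_LAZY | RTLD_GLOBAL);
    }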
Jeff Squyres wrote:
On Jan 30, 2008, at 4:43 AM, Thomas Ropars wrote:
After running autogen.sh, the file opal/libltdl/loaders/dlopen.c
doesn't
exist and more generally the directory opal/libltdl/loaders/ doesn't
exist.
That's why I need to add the RTLD_GLOBAL flag after running
autogen.sh.
Great! Thanks.
Sang Chul
On Jan 30, 2008 9:50 AM, Jeff Squyres wrote:
> Ok. But note that "mpicc --version" does not show *Open MPI's*
> version; it will show the version of the underlying compiler.
>
> You probably want to run ompi_info to see information about your Open
> MPI installation (to include its version).
Ok. But note that "mpicc --version" does not show *Open MPI's*
version; it will show the version of the underlying compiler.
You probably want to run ompi_info to see information about your Open
MPI installation (to include its version).
On Jan 30, 2008, at 9:45 AM, Sang Chul Choi wrote:
Hi,
It was a broken package in the latest Ubuntu (Gutsy). Here is the solution:
https://bugs.launchpad.net/ubuntu/gutsy/+source/openmpi/+bug/152273
Thanks,
Sang Chul
On Jan 30, 2008 9:31 AM, Jeff Squyres wrote:
> See:
>
> http://www.open-mpi.org/faq/?category=mpi-apps#wrapper-showme-with-no-file
I'm getting many "Source and destination overlap in memcpy" errors when
running my application on an odd number of procs.
I believe this is because the Allgather collective is using Bruck's
algorithm and doing a shift on the buffer as a finalisation step
(coll_tuned_allgather.c):
tmprecv = (char
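[For what it's worth, the class of bug Valgrind is flagging looks like this
in miniature -- names are hypothetical, not the actual coll_tuned code:]

    #include <string.h>

    /* Shift the tail of a buffer to its front as a finalisation step. */
    void shift_down(char *buf, size_t shift, size_t total)
    {
        /* memcpy(buf, buf + shift, total - shift) has undefined behaviour
         * when the two regions overlap, which is what Valgrind reports;
         * memmove is specified to handle overlap correctly. */
        memmove(buf, buf + shift, total - shift);
    }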
See:
http://www.open-mpi.org/faq/?category=mpi-apps#wrapper-showme-with-no-file
On Jan 30, 2008, at 9:26 AM, Sang Chul Choi wrote:
Hi,
I installed the Ubuntu package openmpi (Debian package version 1.1-2.5;
the Ubuntu version is the latest one) and I tried to run mpicc to see
its version:
$ mpicc --version
Hi,
I installed the Ubuntu package openmpi (Debian package version 1.1-2.5; the
Ubuntu version is the latest one) and I tried to run mpicc to see its version:
$ mpicc --version
It does not show anything. Do you have any idea what is wrong? I appreciate
any help.
Thank you,
Sang Chul
On Wed, Jan 30, 2008 at 09:13:28AM -0500, Sang Chul Choi wrote:
> Hi,
Hi!
> latest version of Ubuntu. Is the Debian package 1.1-2.5 a relatively recent
> version of Open MPI?
http://packages.debian.org/openmpi
There's 1.2.5-1, which is also the current official release.
HTH
--
Cluster and Me
Hi,
I am wondering which version of Open MPI I should install. I am using the
latest version of Ubuntu. Is the Debian package 1.1-2.5 a relatively recent
version of Open MPI?
Thank you,
Sang Chul
On Jan 30, 2008, at 4:43 AM, Thomas Ropars wrote:
After running autogen.sh, the file opal/libltdl/loaders/dlopen.c
doesn't
exist and more generally the directory opal/libltdl/loaders/ doesn't
exist.
That's why I need to add the RTLD_GLOBAL flag after running
autogen.sh.
I'm using the following version of the autotools.
After running autogen.sh, the file opal/libltdl/loaders/dlopen.c doesn't
exist and more generally the directory opal/libltdl/loaders/ doesn't exist.
That's why I need to add the RTLD_GLOBAL flag after running autogen.sh.
I'm using the following version of the autotools.
autoconf (GNU Autoconf)
Dear Ompi users,
I am new to HPC, but I am helping a friend to compile and run WRF (Weather
Research and Forecasting) on our simple cluster: Intel Xeon PCs connected via
gigabit Ethernet.
(1) First I would like to clarify the problem connected to Open MPI. It was
compiled with the Intel suite:
ifort --version