Hi Jeff,
Jeff Squyres wrote:
On Mar 13, 2009, at 6:17 AM, Raymond Wan wrote:
What doesn't work is:
[On Y] mpirun --host Y,Z --np 2 uname -a
[On Y] mpirun --host X,Y,Z --np 3 uname -a
...and similarly for machine Z. I can confirm that from any of the 3
Do you see "rsh" or "ssh" in the o
On 03/13/09 16:40, Jeff Squyres wrote:
On Mar 13, 2009, at 4:37 PM, Mostyn Lewis wrote:
From config.log
configure:21522: checking for C/C++ restrict keyword
configure:21558: pgcc -c -DNDEBUG -fast -Msignextend -tp p7-64
conftest.c >&5
configure:21564: $? = 0
configure:21582: result: restri
On Fri, Mar 13, 2009 at 9:28 PM, Jeff Squyres wrote:
> No you should not need to do this.
>
> Is there any chance you could upgrade to Open MPI v1.3?
Yes. It works without a Barrier under v1.3. Is this a known problem?
What is the best way for me to test in my configure script that I'm
running
On Mar 13, 2009, at 6:47 AM, Ricardo Fernández-Perea wrote:
On the same machine, the same job takes a lot more time when using
XGrid than when using any other method, even though all the orted run
on the same node. When using XGrid it uses tcp instead of sm. Is that
expected, or do I have a problem?
Thi
Hello,
I have compiled ompi and another program for use on another rhel5/x86_64
machine. After transferring the binaries and setting up environment variables, is
there anything else I need to do for ompi to run properly? When executing my
prog I get:
-
On Mar 13, 2009, at 4:37 PM, Mostyn Lewis wrote:
From config.log
configure:21522: checking for C/C++ restrict keyword
configure:21558: pgcc -c -DNDEBUG -fast -Msignextend -tp p7-64
conftest.c >&5
configure:21564: $? = 0
configure:21582: result: restrict
So you only check using pgcc (not p
From config.log
configure:21522: checking for C/C++ restrict keyword
configure:21558: pgcc -c -DNDEBUG -fast -Msignextend -tp p7-64 conftest.c >&5
configure:21564: $? = 0
configure:21582: result: restrict
So you only check using pgcc (not pgCC)?
DM
On Fri, 13 Mar 2009, Jeff Squyres wrote:
No you should not need to do this.
Is there any chance you could upgrade to Open MPI v1.3?
On Mar 12, 2009, at 12:14 PM, Mikael Djurfeldt wrote:
I should add that the problem disappears if I add a line
MPI::COMM_WORLD.Barrier ()
just before the loop which frees the intercommunicators.
I
On Mar 13, 2009, at 6:17 AM, Raymond Wan wrote:
What doesn't work is:
[On Y] mpirun --host Y,Z --np 2 uname -a
[On Y] mpirun --host X,Y,Z --np 3 uname -a
...and similarly for machine Z. I can confirm that from any of the
3 machines, I can ssh to the other without typing in a password. I
On Mar 13, 2009, at 2:42 PM, Amos Leffler wrote:
Thanks for your advice. I went back carefully through my PATH
settings and corrected them, so that I compiled openmpi-1.2.9 with the
Intel compilers seemingly without errors. However, the simple test
examples still won't run, failing with the same error:
~/Des
I had an off-list discussion about this issue with a colleague at PGI.
I think the issue is this: apparently, "restrict" is different in C
than it is in C++. The Autoconf built-in AC_C_RESTRICT test *only*
tests the C compiler. The particular file you are compiling is C++
(components.cc),
Well George's syntax didn't work, either:
"../../../.././ompi/mca/op/op.h", line 263: error: expected a ")"
typedef void (*ompi_op_base_3buff_handler_fn_1_0_0_t)(void *restrict in1,
^
"../../../.././ompi/mca/op/op.h", line
Hi Josh,
Thanks for your advice. I went back carefully through my PATH
settings and corrected them, so that I compiled openmpi-1.2.9 with the
Intel compilers seemingly without errors. However, the simple test
examples still won't run, failing with the same error:
~/Desktop/openmpi-1.2.9/examples> mpicc hello_c.
Hmmm...your comments don't sound like anything relating to Open MPI.
Are you sure you are not using some other MPI?
Our mpiexec isn't a script, for example, nor do we have anything named
I_MPI_PIN_PROCESSOR_LIST in our code.
:-)
On Mar 13, 2009, at 4:00 AM, Peter Teoh wrote:
I saw the fo
Mark Potts wrote:
> All,
> I don't know PGI's compilers, but is it possible that, since "restrict"
> was supposedly introduced as a C99 feature, it is not supported
> by default by their C compiler? This would explain the wording of
> the error message, which indicates interpretation
I was able to compile 1.3.0 with PGI 8.0-3 on January 27th, if that
helps anyone.
--
Prentice
George Bosilca wrote:
> Apparently, the PGI compiler (version 8) doesn't recognize restrict as a
> keyword in a function prototype if the associated argument is not named.
> There is one obvious solutio
On the same machine, the same job takes a lot more time when using
XGrid than when using any other method, even though all the orted run
on the same node. When using XGrid it uses tcp instead of sm. Is that
expected, or do I have a problem?
Ricardo
FWIW, It compiles with PGI 7.2 on RHEL4U7
[acaird@nyx-login1 ~]$ ompi_info | grep "compiler abs"
C compiler absolute: /usr/caen/pgi-7.2/linux86-64/7.2-1/bin/pgcc
C++ compiler absolute: /usr/caen/pgi-7.2/linux86-64/7.2-1/bin/pgCC
Fortran77 compiler abs: /usr/caen/pgi-7.2/linux86-64/7.2
Hi all,
I'm having a problem running mpirun, and I was wondering if there are
suggestions on how to find the cause. I have 3 machines that I can use:
X, Y, and Z. The important thing is that X is different from Y and Z (the
software installed, version of Linux, etc.); Y and Z are identic
I saw the following problem posed somewhere - can anyone shed some
light? Thanks.
I have a cluster of 8-socket quad-core systems running Redhat 5.2. It
seems that whenever I try to run multiple MPI jobs on a single node,
all the jobs end up running on the same processors. For example, if I
were to