Hi Ralph and Brian,
Thanks for the advice. I have checked the permissions on /tmp:
drwxrwxrwt 19 root root 4096 Jan 18 11:38 tmp
so I think there shouldn't be any problem creating files there; option (a)
still does not work for me.
I tried option (b), which sets --tmpdir on the command line, and
Thanks! That appears to have done it.
Brian
On 1/17/07, Scott Atchley wrote:
On Jan 17, 2007, at 10:45 AM, Brian Budge wrote:
> Hi Adrian -
>
> Thanks for the reply. I have been investigating this further. It
> appears that ssh isn't starting my .zshrc file. This is strange.
You should
You are absolutely correct. Today is just the day for Fortran
issues. :-)
I've filed bug #782 (https://svn.open-mpi.org/trac/ompi/ticket/782)
about this and will get it fixed up for v1.2. Thanks for reporting!
On Jan 17, 2007, at 2:04 PM, Tim Campbell wrote:
The interface definition for MPI_Initialized in
ompi/mpi/f90/scripts/mpi-f90-interfaces.h.sh
is incorrect. The script is generating the interface as
subroutine MPI_Initialized(flag, ierr)
integer, intent(out) :: flag
integer, intent(out) :: ierr
end
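For reference, the MPI standard declares flag as LOGICAL in the Fortran
binding, so the corrected interface would presumably read:
subroutine MPI_Initialized(flag, ierr)
logical, intent(out) :: flag
integer, intent(out) :: ierr
end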
On Jan 17, 2007, at 10:56 AM, Tim Campbell wrote:
In the 1.2b3 build I notice that the opal* page links are no longer
included. Is this by design? Also, and more importantly, the actual
opalcc.1 man page which the links point to is not copied into the
man1 directory. I trace this to the addi
I just updated our local OpenMPI install from 1.2b2 to 1.2b3 and
discovered that the man page setup is incomplete. Here is a
directory listing of the two installs to show what I mean.
{/common/openmpi/pgi/1.2b2/man/man1}[27]% ls -l
total 99
lrwxrwxrwx 1 tjcamp commoners 8 Jan 11 11:37 m
You are absolutely correct, sir! Thanks for noticing -- we'll get
that fixed up.
On Jan 15, 2007, at 1:44 PM, Bert Wesarg wrote:
Hello,
I think the last sentence for the use of MPI_IN_PLACE in the new manual
page is wrong:
Use the variable MPI_IN_PLACE as the value of both sendbuf and
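For context, in a reduction such as MPI_Allreduce the standard passes
MPI_IN_PLACE as the value of sendbuf only, with recvbuf supplying the input
and receiving the result. A minimal Fortran sketch (my illustration, assuming
an Allreduce-style page is meant):
program inplace_demo
use mpi
implicit none
integer :: ierr, buf(4)
call MPI_Init(ierr)
buf = 1
! MPI_IN_PLACE goes in the sendbuf slot only; buf holds input and result
call MPI_Allreduce(MPI_IN_PLACE, buf, 4, MPI_INTEGER, MPI_SUM, &
                   MPI_COMM_WORLD, ierr)
call MPI_Finalize(ierr)
end program inplace_demo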
On Jan 17, 2007, at 10:45 AM, Brian Budge wrote:
Hi Adrian -
Thanks for the reply. I have been investigating this further. It
appears that ssh isn't starting my .zshrc file. This is strange.
You should check the zsh man page. .zshrc is for interactive logins
only. You may want to use .
errno 13 on Linux is EACCES. According to the man page, ftruncate()
only returns errno 13 if the file is owned by another user. I can't
see exactly how this could occur, but you might want to look at /tmp/
and make sure everything in openmpi-sessions-eddie* is owned by user
eddie.
Bri
I would guess that for Open MPI v1.1, we will use more vmem than
MPT. Our strategy early on was to get a huge buffer and never run out
of resources. Obviously, that's not a good long term plan ;). We've
scaled this down considerably in v1.2 (now in beta), where we by
default use about 16MB/
On Jan 17, 2007, at 2:39 AM, Gleb Natapov wrote:
Hi Robin,
On Wed, Jan 17, 2007 at 04:12:10AM -0500, Robin Humble wrote:
so this isn't really an OpenMPI question (I don't think), but you
guys
will have hit the problem if anyone has...
basically I'm seeing wildly different bandwidths over
Hi Adrian -
Thanks for the reply. I have been investigating this further. It appears
that ssh isn't starting my .zshrc file. This is strange.
If I execute
ssh host-0 export
I get only a minimal set of environment variables. One of them is SHELL =
/bin/zsh. There is no LD_LIBRARY_PATH in
Hi Eddie
Open MPI needs to create a temporary file system - what we call our "session
directory" - where it stores things like the shared memory file. From this
output, it appears that your /tmp directory is "locked" to root access only.
You have three options for resolving this problem:
(a) you
Hi Robin,
On Wed, Jan 17, 2007 at 04:12:10AM -0500, Robin Humble wrote:
>
> so this isn't really an OpenMPI question (I don't think), but you guys
> will have hit the problem if anyone has...
>
> basically I'm seeing wildly different bandwidths over InfiniBand 4x DDR
> when I use different kern
so this isn't really an OpenMPI question (I don't think), but you guys
will have hit the problem if anyone has...
basically I'm seeing wildly different bandwidths over InfiniBand 4x DDR
when I use different kernels.
I'm testing with netpipe-3.6.2's NPmpi, but a home-grown pingpong sees
the same
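For reference, a home-grown pingpong is typically just a timed send/receive
loop; a minimal Fortran sketch (my illustration, not Robin's actual code):
program pingpong
use mpi
implicit none
integer, parameter :: n = 131072, iters = 100
integer :: ierr, rank, i
real(8) :: buf(n), t0, t1
call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
buf = 0.0d0
t0 = MPI_Wtime()
do i = 1, iters
  if (rank == 0) then
    call MPI_Send(buf, n, MPI_DOUBLE_PRECISION, 1, 0, MPI_COMM_WORLD, ierr)
    call MPI_Recv(buf, n, MPI_DOUBLE_PRECISION, 1, 0, MPI_COMM_WORLD, &
                  MPI_STATUS_IGNORE, ierr)
  else if (rank == 1) then
    call MPI_Recv(buf, n, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, &
                  MPI_STATUS_IGNORE, ierr)
    call MPI_Send(buf, n, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, ierr)
  end if
end do
t1 = MPI_Wtime()
! each round trip moves 2 * 8*n bytes; report MB/s from rank 0
if (rank == 0) print *, 'bandwidth (MB/s):', 2.0d0*8.0d0*n*iters/(t1-t0)/1.0d6
call MPI_Finalize(ierr)
end program pingpong
Running it with mpirun -np 2 under each kernel should show the same effect
NPmpi does; varying n maps out the bandwidth curve.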
On Tue, Jan 16, 2007 at 05:22:35PM -0800, Brian Budge wrote:
> Hi all -
Hi!
> If I run from host-0:
> > mpirun -np 4 -host host-0 myprogram
>
> I have no problems, but if I run
> > mpirun -np 4 -host host-1 myprogram
> error while loading shared libraries: libSGUL.so: cannot open shared
> object
Dear all,
I have recently installed OpenMPI 1.1.2 on an OpenSSI cluster running Fedora
Core 3. I tested a simple hello world MPI program (attached) and it runs OK
as root. However, if I run the same program as a normal user, it gives the
following error:
[eddie@oceanus:~/home2/mpi_tut]$ mpirun -
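A minimal Fortran MPI hello world along the lines described (a sketch for
illustration, not the actual attachment):
program hello
use mpi
implicit none
integer :: ierr, rank, nprocs
call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
print *, 'Hello from rank', rank, 'of', nprocs
call MPI_Finalize(ierr)
end program hello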
I found that MPT uses a *lot* of vmem for buffering/mem mapping. We
schedule based on requested vmem, so this can be a problem. Do you know
how vmem usage for buffering compares with OpenMPI?
Cheers,
Aaron
-----Original Message-----
From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.