Indeed, my terminology is inexact. I believe you are correct; our
diskless nodes use tmpfs, not ramdisk. Thanks for the clarification!
On 11/4/11 11:00 AM, Rushton Martin wrote:
There appears to be some confusion about ramdisks and tmpfs. A ramdisk
sets aside a fixed amount of memory for its exclusive use.
I just checked my laptop (also running Lion) and I do have gcc at /usr/bin and
it is linked to /usr/bin/gcc-4.2. I just checked again on my Mac Pro and there
is no gcc in /usr/bin although there is a /usr/bin/gcc-3.3, probably left over
from an earlier OS or Xcode. I downloaded and installed X
Thanks, Ralph,
> Having a local /tmp is typically required by Linux for proper operation as
> the OS itself needs to ensure its usage is protected, as was previously
> stated and is reiterated in numerous books on managing Linux systems.
There is a /tmp, but it's not local. I don't know if
I think you have something wrong with your Xcode install; on my Lion
machine, gcc is installed in /usr/bin as always. Also, on OS X, you
should never have to set LD_LIBRARY_PATH.
Brian
On 11/4/11 3:36 PM, "Ralph Castain" wrote:
Just glancing at the output, it appears to be finding a different gcc that
isn't Lion compatible. I know people have been forgetting to clear out all
their old installed software, and so you can pick old things up.
Try setting your PATH and LD_LIBRARY_PATH variables to point at the Xcode gcc.
I had downloaded and installed OpenMPI on my Mac OS X 10.6 machine a few months
ago. I ran the configure and install commands from the FAQ with no problems.
I recently upgraded to Mac OS X 10.7 (Lion) and now when I run mpicc it cannot
find the standard C library headers (stdio.h, stdlib.h, …)
There appears to be some confusion about ramdisks and tmpfs. A ramdisk
sets aside a fixed amount of memory for its exclusive use, so that a
file being written to ramdisk goes first to the cache, then to ramdisk,
and may exist in both for some time. tmpfs however opens up the cache
to programs so
I should have been more careful. When we first started using OpenMPI,
version 1.4.1, there was a bug that caused session directories to be
left behind. This was fixed in subsequent releases (and via a patch
for 1.4.1).
Our batch epilogue still removes everything in /tmp that belongs to the
owner.
On Nov 4, 2011, at 10:19 AM, Blosch, Edwin L wrote:
OK, I wouldn't have guessed that the space for /tmp isn't actually in RAM until
it's needed. That's the key piece of knowledge I was missing; I really
appreciate it. So you can allow /tmp to be reasonably sized, but if you aren't
actually using it, then it doesn't take up 11 GB of RAM. And yo
I wasn't advocating against having the epilogue per se, but was more
curious if there was some issue going on that we did not know about. If
there isn't an issue then great.
--td
On 11/4/2011 9:59 AM, Ralph Castain wrote:
That isn't the situation, Terry. We had problems with early OMPI releases,
particularly the 1.2 series. In response, the labs wrote an epilogue to ensure
that the session directories were removed. Executing the epilogue is now
standard operating procedure, even though our more recent releases do
Sorry for the delay in replying.
I think you need to use MPI_INIT_THREAD with a level of MPI_THREAD_MULTIPLE
instead of MPI_INIT. This sets up internal locking in Open MPI to protect
against multiple threads inside the progress engine, etc.
Be aware that only some of Open MPI's transports are thread-safe.
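The call described above can be sketched as follows; this is a minimal illustrative program, not code from the thread, and the important detail is checking the "provided" level, since the library may grant less than you requested.

```c
/* Sketch: request full thread support and verify what the library
   actually granted; MPI may return a lower level than requested. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE) {
        fprintf(stderr, "warning: MPI_THREAD_MULTIPLE not available "
                        "(provided level %d)\n", provided);
    }
    /* ... MPI work, from multiple threads only if provided allows ... */
    MPI_Finalize();
    return 0;
}
```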
Sorry for the delay in replying.
We don't have any formal documentation written up on this stuff, in part
because we keep optimizing and changing the exact makeup of wire protocols, etc.
If you have any specific questions, we can try to answer them for you.
On Oct 21, 2011, at 2:45 PM, ramu wrote:
After some discussion on the devel list, I opened
https://svn.open-mpi.org/trac/ompi/ticket/2904 to track the issue.
On Oct 25, 2011, at 12:08 PM, Ralph Castain wrote:
> FWIW: I have tracked this problem down. The fix is a little more complicated
> than I'd like, so I'm going to have to ping s
We really need more information in order to help you. Please see:
http://www.open-mpi.org/community/help/
On Nov 3, 2011, at 7:37 PM, amine mrabet wrote:
> I installed the latest version of Open MPI and now I have this error:
> It seems that [at least] one of the processes that was started with
> mpirun
David, are you saying your jobs consistently leave behind session files
after the job exits? It really shouldn't; even when a job aborts, I
thought mpirun took great pains to clean up after itself.
Can you tell us what version of OMPI you are running with? I think I
could see ki
% df /tmp
Filesystem 1K-blocks   Used Available Use% Mounted on
-           12330084 822848  11507236   7% /
% df /
Filesystem 1K-blocks   Used Available Use% Mounted on
-           12330084 822848  11507236   7% /
That works out to 11GB. But..