On 2013-11-25, at 9:02 PM, Ralph Castain wrote:
> On Nov 25, 2013, at 5:04 PM, Pierre Jolivet wrote:
>
>> On Nov 24, 2013, at 3:03 PM, Jed Brown wrote:
>>
>>> Ralph Castain writes:
>>>
>>>> Given that we have no idea what Homebrew uses, I don't know how we
>>>> could clarify/respond.

Sent from my iPhone
Hi Ralph,
Thank you very much for your quick response.
I'm afraid I have found one more issue...
It's not serious, so please look at it whenever you have time.
The problem is cpus-per-proc combined with the -map-by option under the Torque manager.
It doesn't work, as shown below. I guess you can get
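A minimal sketch of the kind of invocation being described, assuming a Torque
allocation (the node counts and the ./myprog executable name are illustrative,
not taken from the original report):

$ qsub -l nodes=2:ppn=8
$ mpirun -np 8 -cpus-per-proc 2 -map-by socket ./myprog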
On Nov 24, 2013, at 3:03 PM, Jed Brown wrote:
> Ralph Castain writes:
>
>> Given that we have no idea what Homebrew uses, I don't know how we
>> could clarify/respond.
>
Ralph, it is pretty easy to know what Homebrew uses, cf.
https://github.com/mxcl/homebrew/blob/master/Library/Formula/op
Karl,
What does “mpic++ -show” return? It is possible that you are
compiling/linking with “c++”, which defaults to clang++, while you
compiled Open MPI with g++.
Since libstdc++ and libc++ have incompatible ABIs, that might be why you are
seeing the wrong behavior.
Also, it could be worthwhi
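A quick way to check for this kind of mismatch (assuming the ./hello_cxx binary
discussed later in the thread) is to look at what the wrapper actually invokes
and which C++ runtime the binary links against:

$ mpic++ -show          # or mpic++ --showme with Open MPI's wrappers
$ otool -L ./hello_cxx
$ c++ --version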
Ok, that should have worked. I just double-checked it to be sure.
ct-login1:/lscratch1/hjelmn/ibm/collective hjelmn$ mpirun -np 32 ./bcast
App launch reported: 17 (out of 3) daemons - 0 (out of 32) procs
ct-login1:/lscratch1/hjelmn/ibm/collective hjelmn$
How did you configure Open MPI and what
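One generic way (not from the original message) to capture how an Open MPI
installation was built is to ask ompi_info:

$ ompi_info              # summary: version, prefix, compilers the build used
$ ompi_info --all        # full details of the build configuration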
Hi Nathan,
I tried qsub option you
mpirun -np 4 --mca plm_base_strip_prefix_from_node_names= 0 ./cpi
--
There are not enough slots available in the system to satisfy the 4 slots
that were requested by the application:
./cpi
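When Open MPI reports fewer slots than expected inside a Torque job, a generic
check (not from the original posts) is to look at what the allocation actually
contains:

$ cat $PBS_NODEFILE
$ sort $PBS_NODEFILE | uniq -c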
Just talked with our local Cray rep. Sounds like that Torque syntax is broken.
You can continue to use qsub (though qsub use is strongly discouraged) if you
use the msub options.
Ex:
qsub -lnodes=2:ppn=16
Works.
-Nathan
On Mon, Nov 25, 2013 at 01:11:29PM -0700, Nathan Hjelm wrote:
Hmm, this seems like either a bug in qsub (Torque is full of serious bugs) or a
bug in ALPS. I got an allocation using that command and ALPS only sees 1 node:
[ct-login1.localdomain:06010] ras:alps:allocate: Trying ALPS configuration
file: "/etc/sysconfig/alps"
[ct-login1.localdomain:06010] ras:
Hi,
I'd like to point out that Cray doesn't run a Work Load Manager (WLM)
on the compute nodes. So if you use PBS or Torque/Moab, your job
ends up on the login node. You have to use something like "aprun"
or "ccmrun" to launch the job on the compute nodes.
Unless "mpirun" or "mpiexec" i
Digging a little deeper by running the code in the lldb debugger, I found that
the stall occurs in a call to orte_init from ompi_mpi_init.c:
356 /* Setup ORTE - note that we are an MPI process */
357 if (ORTE_SUCCESS != (ret = orte_init(NULL, NULL, ORTE_PROC_MPI))) {
358
Here’s the back trace from lldb:
$ ps -elf | grep hello
1042653210 45231 45230 4006 0 31 0 2448976 2148 - S+ 0 ttys002 0:00.01 hello_cxx 9:07AM
1042653210 45232 45230 4006 0 31 0 2457168 2156 - S+ 0 ttys002 0:00.04
Mac OS X 10.9 dropped support for gdb. Please report the output of lldb instead.
Also, can you run “otool -L ./hello_cxx” and report the output?
Thanks,
George.
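A sketch of how to capture both, assuming the hanging binary is ./hello_cxx as
above:

$ lldb ./hello_cxx
(lldb) run
  (press Ctrl-C once it hangs)
(lldb) bt all
(lldb) quit
$ otool -L ./hello_cxx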
I do have DYLD_LIBRARY_PATH set to the same paths as LD_LIBRARY_PATH. This
does not resolve the problem. The code still hangs on MPI::Init().
Another thing I tried: recompiling Open MPI with debugging enabled:
./configure --prefix=$HOME/tools/openmpi --enable-debug
make
make install
I do have DYLD_LIBRARY_PATH set as well, and I get the same problem:
DYLD_LIBRARY_PATH=/Users/meredithk/tools/openmpi/lib
Here’s the directory listing of /Users/meredithk/tools/openmpi/lib
$ ls -1
libmca_common_sm.3.dylib*
libmca_common_sm.dylib@
libmca_common_sm.la*
libmpi.1.dylib*
libmpi.dyli
On 25.11.2013 at 14:25, Meredith, Karl wrote:
> I do have these two environment variables set:
>
> LD_LIBRARY_PATH=/Users/meredithk/tools/openmpi/lib
On a Mac it should be DYLD_LIBRARY_PATH - and are there *.dylib files in your
/Users/meredithk/tools/openmpi/lib?
-- Reuti
> PATH=/Users/meredit
We have occasionally had a problem like this when we set LD_LIBRARY_PATH only.
On OS X you may need to set DYLD_LIBRARY_PATH instead (set it to the same lib
directory).
Can you try that and see if it resolves the problem?
Si Hammond
Sandia National Laboratories
Remote Connection
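A minimal sketch of that suggestion, using the lib directory already quoted in
this thread and the hello_cxx test program:

$ export DYLD_LIBRARY_PATH=/Users/meredithk/tools/openmpi/lib
$ mpirun -n 2 ./hello_cxx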
I do have these two environment variables set:
LD_LIBRARY_PATH=/Users/meredithk/tools/openmpi/lib
PATH=/Users/meredithk/tools/openmpi/bin
Running mpirun seems to work fine with a simple command, like hostname:
$ mpirun -n 2 hostname
meredithk-mac.corp.fmglobal.com
meredithk-mac.corp.fmglobal.co