Greetings,
I was hoping someone could help me with the following situation. I have a
program which has no MPI support that I'd like to run "in parallel" by
running a portion of my total task on N CPUs of a PBS/Maui/Open-MPI
cluster. (The algorithm is such that there is no real need for MPI, I a
Excellent catch -- many thanks!
(this code was just updated recently, causing this problem)
On Apr 23, 2007, at 8:38 PM, Mostyn Lewis wrote:
After 1.3a1r14155 (not sure how much after but certainly currently) you
get a SEGV if you use an unknown shell (I use something called ksh93).
Error
On Apr 23, 2007, at 9:22 PM, Mostyn Lewis wrote:
I tried this on a humble PC and it works there.
I see in the --mca mpi_show_mca_params 1 print out that there is a
[bb17:06646] paffinity=
entry, so I expect that resets the value to 0?
There should be an mpi_paffinity_alone parameter;
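For reference, a quick way to confirm the parameter exists and to set it
explicitly on the command line (a sketch only; ./a.out stands in for
whatever binary is being launched):

  # list the MCA parameters of the mpi framework and look for paffinity
  $ ompi_info --param mpi all | grep paffinity
  # request processor affinity explicitly for a run
  $ mpirun --mca mpi_paffinity_alone 1 -np 4 ./a.out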
Hi John
I'm afraid that the straightforward approach you're trying isn't going to
work with Open MPI in its current implementation. I had plans for supporting
this kind of operation, but that hasn't happened yet. And as you discovered, you
cannot run mpiexec/mpirun in the background, and the "do-not-wait"
John Borchardt wrote:
I was hoping someone could help me with the following situation. I have a
program which has no MPI support that I'd like to run "in parallel" by
running a portion of my total task on N CPUs of a PBS/Maui/Open-MPI
cluster. (The algorithm is such that there is no real need f
Hello,
On Mon, 23 Apr 2007, Bert Wesarg wrote:
> Hello all,
>
> Instructions:
>
> # assume you have mounted the sysfs on /sys
> $ cd /sys
> $ tar cjf cpu-topology.tar.bz2 \
>     devices/system/{cpu/cpu*/topology/*,node/node*/cpu*}
Because of different kernel versions, especially older ones, not all files are
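As a sanity check of what that tar command will actually pick up (an
illustration only; cpu0 is just an example CPU, and the node directory
may be absent on non-NUMA machines or older kernels):

  # per-CPU topology files exposed by the kernel
  $ ls /sys/devices/system/cpu/cpu0/topology/
  # package and core identifiers for that logical CPU
  $ cat /sys/devices/system/cpu/cpu0/topology/physical_package_id
  $ cat /sys/devices/system/cpu/cpu0/topology/core_id
  # NUMA node to CPU mapping, if present
  $ ls /sys/devices/system/node/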
I think you actually have a few options:
1. If I'm reading the original mail right, I don't think you need
mpirun/mpiexec at all. When you submit a scripted job to PBS, the
script runs on the "mother superior" node -- the first node that was
allocated to you. In this case, it's your only
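For option 1, a minimal sketch of such a PBS script, assuming a
Torque/PBS installation where pbsdsh is available; "mytask" is a
hypothetical stand-in for the serial program:

  #!/bin/sh
  #PBS -l nodes=4:ppn=2
  #PBS -N serial-farm
  # pbsdsh starts one instance of the command on every CPU slot
  # PBS allocated to this job; no mpirun/mpiexec is involved
  pbsdsh $PBS_O_WORKDIR/mytask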
Hello,
I finally managed to run Open MPI with uDAPL, but all MPI programs hang
when they use MPI_Recv. If I use TCP or native InfiniBand instead,
it works. Maybe you have an idea where the problem could be.
Thanks
Andreas
config.log.gz
Description: GNU Zip compressed data
ompi_info
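One way to narrow a hang like this down (a suggestion, not something
already tried in the report above) is to pin the BTL selection on the
mpirun command line so only one transport is in play; "ring" is a
placeholder for any small MPI test program:

  # force the uDAPL BTL (plus self for loopback) -- expected to reproduce the hang
  $ mpirun --mca btl udapl,self -np 2 ./ring
  # identical run over TCP for comparison
  $ mpirun --mca btl tcp,self -np 2 ./ring
  # extra verbosity from the BTL framework shows which endpoints get chosen
  $ mpirun --mca btl udapl,self --mca btl_base_verbose 30 -np 2 ./ring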
Andreas,
I am going to guess that, at a minimum, the interfaces are up and you can ping
them. On Solaris there is an additional step required and that is
initializing the dat registry. If "/usr/sbin/datadm -v" does not show
some driver output then you would need to run "/usr/sbin/datadm -a
/usr/sha
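In practice the check described above amounts to something like this (a
sketch; the peer hostname is hypothetical, and the datadm -a argument is
omitted because the exact provider .conf path depends on the installation):

  # confirm the interfaces are up and reachable
  $ ping <peer-ib-hostname>
  # on Solaris, confirm the DAT registry already lists a uDAPL provider
  $ /usr/sbin/datadm -v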
Well, I'm sorry to have caused even a smidgen of grief here.
I moved aside the *paffinity_linux* module and, lo, it still
bound. I was using InfiniPath HCAs and beta software and eventually found
(sigh) a variable to stop the affinity - IPATH_NO_CPUAFFINITY.
So, a
export IPATH_NO_CPUAFFINITY=1
$
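If the variable also needs to reach remote ranks at launch time, mpirun
can export it for them; a usage sketch (process count and binary are
arbitrary):

  # -x exports the named environment variable to every launched process
  $ mpirun -x IPATH_NO_CPUAFFINITY=1 -np 8 ./a.out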
No problem! Glad we could help, and many thanks for tracking down
some of our bugs.
On Apr 24, 2007, at 5:28 PM, Mostyn Lewis wrote:
Well, I'm sorry to have caused even a smidgen of grief here.
I moved aside the *paffinity_linux* module and, lo, it still
bound. I was using InfiniPath HCAs