On Mon, Jan 28, 2013 at 9:20 PM, Brian Budge wrote:
> I believe that, yes, you have to build Open MPI with --enable-mpi-thread-multiple
> to get anything other than MPI_THREAD_SINGLE.
>
I just tested that building with --enable-opal-multi-threads also makes
MPI_Init_thread return MPI_THREAD_FUNNELED.
Does --enable-opal-mult
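(For anyone who wants to check what their build actually provides: a minimal
C sketch, assuming only a working mpicc, that requests MPI_THREAD_MULTIPLE
and prints the level the library grants.)

  #include <stdio.h>
  #include <mpi.h>

  int main(int argc, char **argv)
  {
      int provided;
      /* Request the highest level; 'provided' reports what the build grants. */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
      printf("provided = %s\n",
             provided == MPI_THREAD_MULTIPLE   ? "MPI_THREAD_MULTIPLE"   :
             provided == MPI_THREAD_SERIALIZED ? "MPI_THREAD_SERIALIZED" :
             provided == MPI_THREAD_FUNNELED   ? "MPI_THREAD_FUNNELED"   :
                                                 "MPI_THREAD_SINGLE");
      MPI_Finalize();
      return 0;
  }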
Hi
now I can use all our machines once more. I have a problem on
Solaris 10 x86_64, because the mapping of processes doesn't
correspond to the rankfile. I removed the output from "hostfile"
and wrapped around long lines.
tyr rankfiles 114 cat rf_ex_sunpc
# mpiexec -report-bindings -rf rf_ex_sunpc
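(The rankfile contents were cut from this excerpt. For readers unfamiliar with
the format, a hypothetical rankfile, host names invented, looks roughly like
this: each line pins one rank to a host and a socket:core slot.)

  rank 0=hostA slot=0:0
  rank 1=hostA slot=0:1
  rank 2=hostB slot=1:0
  rank 3=hostB slot=1:1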
Hi,
This is the users mailing list. There is a separate one for questions
related to Open MPI development - de...@open-mpi.org.
Besides, why don't you open a ticket in the Open MPI Trac at
https://svn.open-mpi.org/trac/ompi/ and post patches against trunk there? My
experience shows that even simp
Did you verify that your icpc works properly? Can you compile other C++
applications with icpc?
It might be that your version of icpc isn't supported with that version of
gcc.
I've found a ticket where a similar problem was reported:
https://svn.open-mpi.org/trac/ompi/ticket/3077
The solution
Hi all,
Could you please advise me how to force our users to use PBS instead of "mpirun
--hostfile"? Or how do I control mpirun so that any user using "mpirun
--hostfile" will not overload the cluster? We have Open MPI installed
with Torque/Maui and we can control users' limits (total number of
procs, total
On 05.02.2013 at 11:24, Duke Nguyen wrote:
> Could you please advise me how to force our users to use PBS instead of "mpirun
> --hostfile"? Or how do I control mpirun so that any user using "mpirun
> --hostfile" will not overload the cluster? We have Open MPI i
To add to what Reuti said, if you enable PBS support in Open MPI, when users
"mpirun ..." in a PBS job, Open MPI will automatically use the PBS native
launching mechanism, which won't let you run outside of the servers allocated
to that job.
Concrete example: if you qsub a job and are allocated
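(The concrete example is cut off above. As an illustration only, with script
contents assumed rather than taken from the original message, a Torque/PBS job
of the kind being described typically looks like this:)

  #!/bin/sh
  #PBS -l nodes=2:ppn=4
  cd $PBS_O_WORKDIR
  # No --hostfile or -np needed: with PBS/tm support built in, Open MPI
  # takes the node list and slot counts from the PBS allocation.
  mpirun ./my_mpi_app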
This is a bit late in the thread, but I wanted to add one more note.
The functionality that made it to v1.6 is fairly basic in terms of C/R
support in Open MPI. It supported a global checkpoint write, and (for a
time) a simple staged option (I think that is now broken).
In the trunk (about 3 year
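(As a rough illustration only, recalled from the 1.6-era C/R tools rather than
taken from this thread: the global checkpoint path was driven from the command
line roughly as below; check the fault-tolerance documentation for the exact
options supported by your build.)

  mpirun -am ft-enable-cr -np 4 ./my_mpi_app
  ompi-checkpoint <pid_of_mpirun>
  ompi-restart <global_snapshot_handle>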
Siegmar --
We've been talking about this offline. Can you send us an lstopo output from
your Solaris machine? Send us the text output and the xml output, e.g.:
lstopo > solaris.txt
lstopo solaris.xml
Thanks!
On Feb 5, 2013, at 12:30 AM, Siegmar Gross
wrote:
> Hi
>
> now I can use all ou
LART your users. It's the only way.
They will thank you for it, eventually.
www.catb.org/jargon/html/L/LART.html
On 02/05/2013 08:52 AM, Jeff Squyres (jsquyres) wrote:
To add to what Reuti said, if you enable PBS support in Open MPI, when users "mpirun
..." in a PBS job, Open MPI will automatically use the PBS native launching
mechanism, which won't let you run outside of the servers allocated to that jo
On 02/05/13 00:30, Siegmar Gross wrote:
now I can use all our machines once more. I have a problem on
Solaris 10 x86_64, because the mapping of processes doesn't
correspond to the rankfile. I removed the output from "hostfile"
and wrapped around long lines.
tyr rankfiles 114 cat rf_ex_sunpc
# m
On 02/05/13 13:20, Eugene Loh wrote:
On 02/05/13 00:30, Siegmar Gross wrote:
now I can use all our machines once more. I have a problem on
Solaris 10 x86_64, because the mapping of processes doesn't
correspond to the rankfile.
A few comments.
First of all, the heterogeneous environment had n
On Feb 5, 2013, at 2:18 PM, Eugene Loh wrote:
> Sorry for the dumb question, but who maintains this code? OMPI, or upstream
> in the hwloc project? Where should the fix be made?
The version of hwloc in the v1.6 series is frozen at a somewhat-older version
of hwloc (1.3.2, which was the end o