manpage and on the OpenMPI website, see
http://www.open-mpi.org/faq/?category=tuning#paffinity-defs
I hope this answers your original question.
Jens
> Thank you
>
>
> 2013/1/29 Jens Glaser
> Hi Pradeep,
>
> On Jan 28, 2013, at 11:16 PM, Pradeep Jha wrote:
>>
even if you
start it without mpirun.
However, if you do start it with mpirun, the number of processes given by -np is launched,
distributed across the available cores. Provided your node really has 8 physical CPUs with 8 cores
each and you want your program to utilize all your 64 cores, you should start
it with -np 64.
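For example (./your_program is only a placeholder for your executable):

  mpirun -np 64 ./your_program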
Jens
es of MPI. Since your kernel calls are already asynchronous with respect to the host,
all you have to do is asynchronously copy the data between host and
device.
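Something along these lines (only a sketch, not taken from any real code; the buffers,
the stream and the destination rank are assumed to be set up elsewhere):

#include <mpi.h>
#include <cuda_runtime.h>

/* Copy device results into a pinned host buffer asynchronously, then hand
   the host buffer to MPI once the copy has completed. */
void send_device_data(const double *d_buf, double *h_buf, size_t n,
                      cudaStream_t stream, int dest, int tag)
{
    cudaMemcpyAsync(h_buf, d_buf, n * sizeof(double),
                    cudaMemcpyDeviceToHost, stream);
    /* other host work can overlap with the copy here */
    cudaStreamSynchronize(stream);   /* data is now in h_buf */
    MPI_Send(h_buf, (int)n, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD);
}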
Jens
On Dec 12, 2012, at 6:30 PM, Justin Luitjens wrote:
> Hello,
>
> I'm working on an application using OpenMPI with CUDA and
cudaHostAlloc/cudaFreeHost() (I assume OpenMPI 1.7 will have some level of cuda
support), because
otherwise applications using GPUDirect are not guaranteed to work correctly
with them, that is, they will exhibit undefined behavior.
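For reference, a pinned host buffer for the MPI transfers could be set up roughly
like this (sketch only; h_buf and n are example names, error checking omitted):

/* allocate a page-locked (pinned) host buffer of n doubles */
double *h_buf = NULL;
cudaHostAlloc((void **)&h_buf, n * sizeof(double), cudaHostAllocDefault);
/* ... use h_buf as the send/receive buffer in the MPI calls ... */
cudaFreeHost(h_buf);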
Jens
On Nov 3, 2012, at 10:41 PM, Jens Glaser wrote:
>
mpi.org/faq/?category=openfabrics#setting-mpi-leave-pinned-1.3.2
Can anyone please explain the intricacies of this parameter to me, and what
the ramifications/benefits of having this particular default value are?
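(For completeness: the parameter I mean is mpi_leave_pinned, which as far as I
understand can be set e.g. with "mpirun --mca mpi_leave_pinned 1 ..." or in an
mca-params.conf file.)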
Thanks
Jens
-rank, which should have helped
in this case, or --bind-to-none.
Unfortunately, these options are now gone and I couldn't figure out how to make
it work with the newest version.
Can anyone offer any hints on this?
Thanks,
Jens.
--prefix=/home/it1/glaser/local
--with-tm=/opt/torque --enable-shared
Does anyone have any idea what causes openmpi to select cm by default?
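(I suppose I could explicitly request another PML with something like
"mpirun --mca pml ob1 ...", but I would like to understand why cm is picked.)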
Thanks,
Jens.
up? This seems to work also. Or must one always call
MPI_Comm_create on all processes in comm, as the description says?
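For concreteness, the fully collective pattern I understand the description to
require would look roughly like this (C sketch; the chosen group members are just
an example, run with at least 2 ranks):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Group world_group, sub_group;
    MPI_Comm  sub_comm;
    int       members[] = {0, 1};   /* example: ranks 0 and 1 form the new communicator */

    MPI_Init(&argc, &argv);
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Group_incl(world_group, 2, members, &sub_group);

    /* MPI_Comm_create is collective over the old communicator, so every rank
       in MPI_COMM_WORLD calls it; ranks outside the group get MPI_COMM_NULL. */
    MPI_Comm_create(MPI_COMM_WORLD, sub_group, &sub_comm);

    if (sub_comm != MPI_COMM_NULL)
        MPI_Comm_free(&sub_comm);
    MPI_Group_free(&sub_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}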
Jens Jørgen
-- Josh
2012/1/20 Jens Jørgen Mortensen <je...@fysik.dtu.dk>
Hi!
For a long time, I have been calling MPI_Comm_create(comm,
So, I guess I have just been lucky that it has worked for me? Or is it
OK to do what I do?
Jens Jørgen
I already tried this parameter, but I don't see any improvement in the
benchmarks. Additionally, while investigating further with opensm, I did
not see QP requests for LIDs other than the base LIDs.
Regards
Jens
Jeff Squyres wrote:
Yes, check out the btl_openib_m
irail feature, to split traffic across two ports
of one HCA)
The only function I have found was to enable automatic path migration
over LMC, but that is only for failover, if I remember correctly.
Regards,
Jens
Hi Eugene,
thanks for your answer ... I am beginning to understand - even though I
am not happy with it :)
Greetings
Jens
Eugene Loh wrote:
> Jens wrote:
>
>> Hi Terry,
>>
>> I would like to run a paraview-server all the time on our cluster (even
>> though it is no
result in some kind of
"heating thread" that just busy-waits and burns CPU cycles, which is not a nice idea.
Greetings
Jens
Terry Frankcombe wrote:
> As Eugene said: Why are you desperate for an idle CPU? Is it not
> yielding to other processes?
>
>
> On Mon, 2008-12-08 at 10:01 +0100, Jens wrote:
>>
Greetings
Jens
Eugene Loh wrote:
> Douglas Guptill wrote:
>
>> Hi:
>>
>> I am using openmpi-1.2.8 to run a 2-processor job on an Intel
>> Quad-core CPU. Opsys is Debian etch. I am reasonably sure that, most
>> of the time, one process is waiting for results from t
Hi Jeff,
thanks a lot. This fixed a bug in my code.
I already like open-mpi for this :)
Greetings
Jens
Jeff Squyres wrote:
> These functions do exist in Open MPI, but your code is not quite
> correct. Here's a new version that is correct:
>
> -
> program main
>
call MPI_ATTR_GET(MPI_COMM_WORLD, MPI_IO, attr_val, attr_flag, ierr)
Any ideas ...?
Greetings
Jens
program main
use mpi
implicit none
integer :: ierr, rank, size
integer :: attr_val, attr_flag
call MPI_INIT(ierr)
call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
call MPI
my problem?
Two other questions: what is the
ras (resource allocation subsystem), and how can I set it up / what
options does it have?
And the pls (process launch subsystem): how can I set it up / what options
does it have?
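(As far as I can tell, the available components and their parameters can be listed
with e.g. "ompi_info --param ras all" and "ompi_info --param pls all".)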
Regards Jens
remote node
are not the problems.
(a) Passwordless ssh is set up and all nodes see the same home!
(b) The Open MPI libraries are located in my home directory, which is visible to every
node.
mpirun sometimes works with all cpus/nodes of the cluster, but sometimes
it won't and the error described below will occu
ux (suse10) cluster with InfiniBand connection and
openmpi-1.2a1r10111.
I attach the ompi_info --param all all output, hope it helps.
Regards Jens
MCA mca: parameter "mca_param_files" (current value:
"/home/klosterm/.openmpi/mca-params.conf:/home/pub/OpenFOAM/OpenFOAM-1.
_init.c
at line 49
--
Open RTE was unable to initialize properly. The error occured while
attempting to orte_init(). Returned value -1 instead of ORTE_SUCCESS.
Does anybody have an idea what the error might be, or how to track it down?
Regards Jens
.
--
[stokes:11293] [0,0,0] ORTE_ERROR_LOG: Not found in file
rmgr_urm_component.c at line 190
Jens
] ORTE_ERROR_LOG: Not implemented in file
rmgr_urm.c at line 365
[stokes:00740] mpirun: spawn failed with errno=-7
What should I do to track the error or to get rid of it?
Jens