Hi,
On 08.12.2010, at 21:20, Ralph Castain wrote:
> Afraid I'm not an x-forwarding expert... :-(
>
> Hopefully someone more knowledgeable can chime in.
>
>
> On Dec 8, 2010, at 12:54 PM, brad baker wrote:
>
>> Ya, I just tested -x as well, and it does indeed set the value of DISPLAY
>> correctly for every process, every time I run it.
Dear all,
I am confused about how to use MPI derived datatypes for classes with a
static member. How do I create a derived datatype for something like
class test {
    static const int i = 5;
    double data[5];
};
Thanks for your help!
Best,
Santosh
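A static data member is not stored inside each object, so an MPI derived
datatype only has to describe the per-object fields - here just the five
doubles. A minimal sketch along those lines (the helper function name and the
assumption that data is the only non-static member are mine, not from the
original post):

#include <mpi.h>

class test {
    static const int i = 5;  // shared class constant, lives outside each object
    double data[5];          // the only per-object storage MPI has to describe
};

// Describe one 'test' object: five contiguous doubles. With several
// non-static members, MPI_Type_create_struct and per-member offsets
// would be used instead.
MPI_Datatype make_test_type() {
    MPI_Datatype t;
    MPI_Type_contiguous(5, MPI_DOUBLE, &t);
    MPI_Type_commit(&t);
    return t;
}

An object can then be sent as MPI_Send(&obj, 1, make_test_type(), dest, tag,
comm); the static constant never travels, since every rank already has it.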
Afraid I'm not an x-forwarding expert... :-(
Hopefully someone more knowledgeable can chime in.
On Dec 8, 2010, at 12:54 PM, brad baker wrote:
> Ya, I just tested -x as well, and it does indeed set the value of DISPLAY
> correctly for every process, every time I run it. Unfortunately the display
> is still not behaving as desired.
Dear Ralph,
Thank you for your reply. I did check the LD_LIBRARY_PATH and recompiled with
the new version, and it worked perfectly.
Thank you again.
Best Regards,
Toan
On Thu, Dec 9, 2010 at 12:30 AM, Ralph Castain wrote:
> That could mean you didn't recompile the code using the new version of
> OMPI.
Ya, I just tested -x as well, and it does indeed set the value of DISPLAY
correctly for every process, every time I run it. Unfortunately the display
is still not behaving as desired. Sometimes they open, and sometimes they
don't.
I'm currently using openmpi-1.4.1 over InfiniBand on a Rocks cluster.
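For reference, -x tells mpirun to export an environment variable from the
shell where mpirun runs to every launched process, e.g. (the program name is
a placeholder):

mpirun -np 4 -x DISPLAY ./my_x_app

Every rank then sees the same DISPLAY string; whether the remote nodes can
actually open that display still depends on X forwarding and xauth
permissions, which would explain windows appearing only sometimes.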
Hi
I am currently testing a demo version of TotalView.
I am putting this question here because the TotalView
manual is very sparse on information about OpenMPI.
The first question is how to start TotalView with mpirun.
I saw that mpirun has some built-in TotalView capability.
For debugging:
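Two launch styles commonly used with Open MPI builds of that era (the
executable name is a placeholder; check your mpirun man page for the exact
debugger options):

totalview mpirun -a -np 4 ./my_app
mpirun --debug -np 4 ./my_app

The first starts TotalView and hands it the whole mpirun command line via -a;
the second asks mpirun itself to launch whatever debugger is configured
through the orte_base_user_debugger MCA parameter.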
Ralph Castain wrote:
I know we have said this many times - OMPI made a design decision to poll hard
while waiting for messages to arrive to minimize latency.
If you want to decrease CPU usage, you can use the yield_when_idle option (it
will cost you some latency, though) - see ompi_info --param ompi all
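For example (parameter name as reported by ompi_info for the 1.4 series):

mpirun --mca mpi_yield_when_idle 1 -np 4 ./my_app

This makes idle processes yield the CPU instead of spinning flat out, trading
a bit of latency for lower CPU usage, as noted above.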
Also -
HPC clusters are commonly dedicated to running parallel jobs with exactly
one process per CPU. HPC is about getting computation done, and letting a
CPU time-slice among competing processes always has overhead (CPU time not
spent on the computation).
Unless you are trying to run extra pr
Dear all,
Now I am studying the openib component, and I find it really complicated.
I have one question, which is as follows:
During initialization of the openib component, a function named setup_qps()
is called. In this function, the following code segment appears:
mca_btl_o
Hello,
I am having trouble trying to compile and run IPM on an SGI Altix cluster.
The issue is that this cluster provides a default SGI MPT implementation of
MPI, but I want to use a private installation of OpenMPI 1.4.3 instead.
1) When I compile IPM as recommended, everything works fine, bu
I know we have said this many times - OMPI made a design decision to poll hard
while waiting for messages to arrive to minimize latency.
If you want to decrease CPU usage, you can use the yield_when_idle option (it
will cost you some latency, though) - see ompi_info --param ompi all
Or don't se
That could mean you didn't recompile the code using the new version of OMPI.
The 1.4 and 1.5 series are not binary compatible - you have to recompile your
code.
If you did recompile, you may be getting version confusion on the backend nodes
- you should check your LD_LIBRARY_PATH and ensure it
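A quick way to check for that kind of version confusion on a backend node is
to see which MPI library the binary actually resolves (paths are illustrative):

ldd ./my_app | grep libmpi
echo $LD_LIBRARY_PATH

If libmpi resolves into the old installation prefix, prepending the new
installation's lib directory to LD_LIBRARY_PATH (or relinking) fixes it.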
Hello,
on Win32 with openmpi 1.4.3, I have a slave process that reaches this
pseudo-code and then blocks, and the CPU usage for that process stays at 25%
the whole time (I have a quad-core processor). When I set its affinity to one
of the cores, that core is 100% busy because of my slave process.
main()
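(The pseudo-code itself was cut off above. Purely as an illustration of the
behaviour being described - one core pegged while a rank waits in a blocking
call - a slave that does nothing but a blocking receive would look roughly
like this; it is a reconstruction, not the poster's code.)

#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int value = 0;
    MPI_Status status;
    // With Open MPI's default progress engine this blocking receive
    // busy-polls, so the process keeps one core fully busy (25% of a
    // quad-core box) even though it is only waiting for a message.
    MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);

    MPI_Finalize();
    return 0;
}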
Dear all,
I am having a problem running mpirun with OpenMPI 1.5. I
compiled OpenMPI 1.5 with BLCR 0.8.2 and OFED 1.4.1 as follows:
./configure \
--with-ft=cr \
--enable-mpi-threads \
--with-blcr=/home/nguyen/opt/blcr \
--with-blcr-libdir=/home/nguyen/opt/blcr/lib \
--prefix=/home/nguy