From a pretty old experiment I made, compression was giving good
results on a 10 Mbps network but was actually increasing RTT at
100 Mbps and above. I played with all the zlib settings from 1 to 9,
and even the lowest compression setting was unable to reach decent
performance. I don't belie
Hi,
On 10:03 Thu 24 Apr, Steven Truong wrote:
> Could somebody tell me what might cause this error?
I'll try.
> [compute-1-27:31550] *** Process received signal ***
> [compute-1-27:31550] Signal: Segmentation fault (11)
> [compute-1-27:31550] Signal code: Address not mapped (1)
"Address n
Tamer,
I'm confident that this particular problem is now fixed in the trunk
(r18276). If you are interested in the details of the bug and how it
was fixed, the commit message is fairly detailed:
https://svn.open-mpi.org/trac/ompi/changeset/18276
Let me know if this patch fixes things. Like
On Thu, Apr 24, 2008 at 11:17:30AM -0400, George Bosilca wrote:
> Well, blocking or not blocking, this is the question!!! Unfortunately, it's
> more complex than this thread seems to indicate. It's not that we didn't
> want to implement it in Open MPI, it's that at one point we had to make a
> c
I just wanted to add my last comment since this discussion seems to be
very hot! As Jeff mentioned, while a process is waiting to receive a
message it doesn't really matter whether it uses blocking or polling.
What I really meant was that blocking can be useful to use CPU cycles
to handle other calcul
Jeff,
I don't know if there is a way to capture the "not of required
architecture" response and add it to the error message. I agree that
the current error message captures the problem in broad terms and
points to the config.log file. It is just not very specific. If the
architecture p
Hi. I recently encountered this error and cannot really understand
what this means. I googled and could not find any relevant
information. Could somebody tell me what might cause this error?
Our systems: Rocks 4.3 x86_64, openmpi-1.2.5, scalapack-1.8.0,
Barcelona, Gigabit interconnections.
T
On Apr 24, 2008, at 12:24 PM, George Bosilca wrote:
There are so many special errors that are compiler and operating
system dependent that there is no way to handle each of them
specifically. And even if it were possible, I would not use autoconf
if the resulting configure file were 100 MB ...
Additionally, I think the error message is more than
On Apr 24, 2008, at 9:09 AM, Adrian Knoth wrote:
On Thu, Apr 24, 2008 at 08:25:44AM -0400, Alberto Giannetti wrote:
I am using one of the nodes as a desktop computer. Therefore it is
most important for me that the mpi program is not so greedily
acquiring cpu time.
From a performance/usabili
Jeff,
For the specific problem of the gcc compiler creating i386 objects
and ifort creating x86_64 objects, in the config.log file it says
configure:26935: ifort -o conftest conftest.f conftest_c.o >&5
ld: warning in conftest_c.o, file is not of required architecture
If configure could pi
I have never tested this before, so I could be wrong. However, my best guess
is that the following is happening:
1. you trap the signal and do your cleanup. However, when your proc now
exits, it does not exit with a status of "terminated-by-signal". Instead, it
exits normally.
2. the local daemon
Barry Rountree schrieb:
> On Thu, Apr 24, 2008 at 12:56:03PM +0200, Ingo Josopait wrote:
>> I am using one of the nodes as a desktop computer. Therefore it is most
>> important for me that the mpi program is not so greedily acquiring cpu
>> time.
>
> This is a kernel scheduling issue, not an Op
What George said is what I meant by "it's a non-trivial amount of
work." :-)
In addition to when George adds these patches (allowing components to
register for blocking progress), there's going to be some work to deal
with shared memory (we have some ideas here, but it's a bit more than
j
On Apr 24, 2008, at 11:07 AM, Doug Reeder wrote:
Make sure that your compilers are all creating code for the same
architecture (i386 or x86-64). ifort usually installs such that the
64-bit version of the compiler is the default while the Apple gcc
compiler creates i386 output by default. Check the architecture of
the .o files with file *.
Well, blocking or not blocking, this is the question!!! Unfortunately,
it's more complex than this thread seems to indicate. It's not that we
didn't want to implement it in Open MPI, it's that at one point we had
to make a choice ... and we decided to always go for performance first.
Howeve
Make sure that your compilers are all creating code for the same
architecture (i386 or x86-64). ifort usually installs such that the
64-bit version of the compiler is the default while the Apple gcc
compiler creates i386 output by default. Check the architecture of
the .o files with file *.
Actually, even in this particular condition (over the internet),
compression makes sense only for very specific data. The problem is
that the compression algorithm is usually very expensive if you want
to really get an interesting factor of size reduction. And there is
the tradeoff, what you save
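[Editor's note: to make that tradeoff concrete, here is a
back-of-the-envelope sketch in C. Every number in it is an assumption
for illustration, not a measurement.]

/* Compression wins only if compress time + compressed send time
   beats the raw send time. All figures below are assumptions. */
#include <stdio.h>

int main(void) {
    double msg_mb    = 8.0;   /* message size in MB (assumed) */
    double link_mb_s = 12.5;  /* ~100 Mbps link, in MB/s */
    double comp_mb_s = 30.0;  /* assumed zlib level-1 throughput */
    double ratio     = 0.6;   /* assumed compressed/raw size ratio */

    double t_raw  = msg_mb / link_mb_s;
    double t_comp = msg_mb / comp_mb_s + msg_mb * ratio / link_mb_s;

    printf("raw send: %.3f s, compress+send: %.3f s\n", t_raw, t_comp);
    return 0;
}

With these assumed figures the compressed path is already slightly
slower at 100 Mbps (0.65 s vs 0.64 s), and the gap widens as the link
gets faster, which matches what is reported above.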
Tamer,
Another user contacted me off list yesterday with a similar problem
with the current trunk. I have been able to reproduce this, and am
currently trying to debug it again. It seems to occur more often with
builds without the checkpoint thread (--disable-ft-thread). It seems
to be a
Josh, Thank you for your help. I was able to do the following with
r18241:
start the parallel job
checkpoint and restart
checkpoint and restart
checkpoint but failed to restart with the following message:
ompi-restart ompi_global_snapshot_23800.ckpt
[dhcp-119-202.caltech.edu:23650] [[45699,1],
You probably want to use all the Intel compilers, not just ifort.
CC=icc
CXX=icpc
FC=ifort
F77=ifort
-jms
Sent from my PDA. No type good.
-Original Message-
From: Koun SHIRAI [mailto:k...@sanken.osaka-u.ac.jp]
Sent: Thursday, April 24, 2008 08:09 AM Eastern Standard Time
To: Op
Additionally, the mpi-t spec has some accept/connect examples in the dynamic
processes chapter.
-jms
Sent from my PDA. No type good.
-Original Message-
From: Tim Prins [mailto:tpr...@open-mpi.org]
Sent: Thursday, April 24, 2008 09:33 AM Eastern Standard Time
To: Open MPI Users
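[Editor's note: for reference, a minimal server-side sketch of the
accept/connect pattern mentioned above. Error checking is omitted, and
in practice the port name would be passed to the client out of band or
via MPI_Publish_name.]

/* Server side: open a port and accept one client connection.
   The client calls MPI_Comm_connect with the same port name. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    char port_name[MPI_MAX_PORT_NAME];
    MPI_Comm client;

    MPI_Init(&argc, &argv);
    MPI_Open_port(MPI_INFO_NULL, port_name);
    printf("connect to: %s\n", port_name);  /* hand this to the client */
    MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);
    /* ... communicate over 'client' ... */
    MPI_Comm_disconnect(&client);
    MPI_Close_port(port_name);
    MPI_Finalize();
    return 0;
}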
On Thu, Apr 24, 2008 at 12:56:03PM +0200, Ingo Josopait wrote:
> I am using one of the nodes as a desktop computer. Therefore it is most
> important for me that the mpi program is not so greedily acquiring cpu
> time.
This is a kernel scheduling issue, not an Open MPI issue. Busy waiting
in one p
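[Editor's note: as an application-level workaround, one can trade a
little latency for idle CPU by polling with MPI_Iprobe and napping
between probes. A minimal sketch; the 1 ms interval is an arbitrary
choice.]

/* Wait for a message without spinning at 100% CPU. Latency grows by
   up to one sleep interval in exchange for idle cycles. */
#include <mpi.h>
#include <unistd.h>

static void lazy_recv(void *buf, int count, MPI_Datatype type, int src,
                      int tag, MPI_Comm comm, MPI_Status *status) {
    int flag = 0;
    while (!flag) {
        MPI_Iprobe(src, tag, comm, &flag, status);
        if (!flag)
            usleep(1000);  /* ~1 ms nap instead of busy-waiting */
    }
    MPI_Recv(buf, count, type, src, tag, comm, status);
}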
Hello, all -
I have an OpenMPI application that generates a file while it runs. No
big deal. However, I'd like to delete the partial file if the job is
aborted via a user signal. In a non-MPI application, I'd use sigaction
to intercept the SIGTERM and delete the open files there. I'd then
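[Editor's note: a minimal sketch of that sigaction approach. The
partial-file name is a placeholder; re-raising the signal after cleanup
preserves the "terminated-by-signal" exit status, which matters for the
point about the local daemon made elsewhere in this thread.]

/* On SIGTERM: delete the partial file, restore the default handler,
   and re-raise so the process still dies "by signal". */
#include <signal.h>
#include <unistd.h>

static const char *partial_file = "output.partial";  /* placeholder */

static void cleanup_handler(int sig) {
    unlink(partial_file);   /* unlink() is async-signal-safe */
    signal(sig, SIG_DFL);   /* restore default action */
    raise(sig);             /* terminate with the original signal */
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = cleanup_handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGTERM, &sa, NULL);
    /* ... MPI work that writes the partial file ... */
    pause();  /* stand-in for the real computation */
    return 0;
}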
Open MPI ships with a full set of man pages for all the MPI functions;
you might want to start with those.
Tim
Alberto Giannetti wrote:
I am looking to use MPI in a publisher/subscriber context. Haven't
found much relevant information online.
Basically I would need to deal with dynamic tag su
On Thu, Apr 24, 2008 at 08:25:44AM -0400, Alberto Giannetti wrote:
> > I am using one of the nodes as a desktop computer. Therefore it is
> > most important for me that the mpi program is not so greedily
> > acquiring cpu time.
> From a performance/usability standpoint, you could set interactive
> a
On Apr 24, 2008, at 8:26 AM, Tomas Ukkonen wrote:
Yes, you are probably right that it's not worth the effort in general,
and especially not in HPC environments where you have a very fast
network. But I can think of (rather important) special cases where it
is important:
- non-HPC environments
I am looking to use MPI in a publisher/subscriber context. Haven't
found much relevant information online.
Basically I would need to deal with dynamic tag subscriptions from
independent components (connectors) and a number of other issues. I
can provide more details if there is an interest. A
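[Editor's note: one crude way to approximate topic subscriptions inside
a single communicator is to use the message tag as the topic id. A
sketch; the topic ids and names below are made up for illustration.]

/* Topic-as-tag "subscription" inside one communicator. */
#include <mpi.h>
#include <stdio.h>

enum { TOPIC_QUOTES = 100, TOPIC_TRADES = 101 };  /* invented topics */

static void receive_topic(int topic, MPI_Comm comm) {
    double payload[16];
    MPI_Status status;
    /* Accept this topic from any publisher rank. */
    MPI_Recv(payload, 16, MPI_DOUBLE, MPI_ANY_SOURCE, topic, comm,
             &status);
    printf("topic %d from rank %d\n", topic, status.MPI_SOURCE);
}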
George Bosilca wrote:
> The paper you cited, while presenting a particular implementation,
> doesn't present any new ideas. The compression of the data was
> studied for a long time, and [unfortunately] it always came back to
> the same result. In the general case, not worth the effort!
>
> Now
On Apr 24, 2008, at 6:56 AM, Ingo Josopait wrote:
I am using one of the nodes as a desktop computer. Therefore it is
most important for me that the MPI program is not so greedily
acquiring CPU time.
From a performance/usability standpoint, you could set interactive
applications on higher priori
Dear Sir:
I think that this problem must have been solved before, and maybe some
information is available in the archives. But I could not find the
right answer in my searches, so please allow me to repeat the question.
I tried to install openmpi-1.2.5 on a new Xserve (Xeon) with Leopard.
The Intel compiler is used
Jeff Squyres wrote:
> On Apr 22, 2008, at 9:03 AM, Tomas Ukkonen wrote:
>
>> I read from somewhere that OpenMPI supports
>>
>> some kind of data compression but I couldn't find
>> any information about it.
>>
>> Is this true and how it can be used?
>>
> Nope, sorry -- not true.
>
> This jus
I am using one of the nodes as a desktop computer. Therefore it is most
important for me that the MPI program is not so greedily acquiring CPU
time. But I would imagine that energy consumption is generally a big
issue, since energy is a major cost factor in a computer cluster. When a
CPU is idl
Dear all,
To explain the behavior of MPI_Reduce on our cluster I ran through the
source of Open MPI 1.2.6. On line 357 I found a mistake (maybe ;-)). It
should be:
return ompi_coll_tuned_reduce_intra_binary(sendbuf, recvbuf, count,
                                           datatype, op, root, comm,
                                           segsize);
instead of
return ompi_col