other messages.
Any solution?
alberto
ompi_info:
Open MPI: 1.1.1
Open MPI SVN revision: r11473
Open RTE: 1.1.1
Open RTE SVN revision: r11473
OPAL: 1.1.1
OPAL SVN revision: r11473
Prefix: /usr/local
Configured architecture:
migrate the
ranks to other processors on run-time execution?
Thank you in advance,
Alberto.
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
th to be used, but I don't seem to get
it working.
Which are the steps to get OMPI to use larger TCP packets length? Is it
possible to reach 13000 bytes instead of the standard 1500?
Thank you in advance,
Alberto
I am using version 1.10.6 on archlinux.
The option I should pass to mpirun should then be "-mca btl_tcp_mtu 13000"?
Just to be sure.
Thank you,
Alberto
On 5 May 2017 at 16:26, "r...@open-mpi.org" wrote:
> If you are looking to use TCP packets, then you want to set the
s with root permissions in order to communicate with the HW
accelerators. There is no instantiation or use of those files until after
running some functions in the main program, so there should be no problem
or concern with that part.
Thank you in advance,
Alberto
is done by a DMA engine that is not
cache-coherent
with the main processor. (End of rationale.)
I use plenty of nonblocking post-sends in my code. Is it really true that
the sender must not access any part of the send buffer, not even for
stores? Or was it an MPI 1.0 issue?
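For reference: the conservative pattern that is legal under every MPI version is to treat the buffer as off-limits between the nonblocking send and its completion call. As far as I can tell, MPI-2.2 relaxed the original rule so that *reading* (loading from) the send buffer is permitted while the send is in flight; writing it remains forbidden until completion. A minimal sketch of the conservative pattern (ranks, tag, and buffer contents are illustrative, not from the original program):

```c
/* Conservative pattern: do not touch the buffer between MPI_Isend
 * and the matching MPI_Wait.  Build with: mpicc isend_rule.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf[4] = {1, 2, 3, 4};
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        MPI_Isend(buf, 4, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
        /* ... overlap unrelated computation here; under strict
         * MPI-1/2.1 rules, no loads or stores on buf ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        /* buf may be read or reused only from here on */
    } else if (rank == 1) {
        MPI_Recv(buf, 4, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("received %d %d %d %d\n", buf[0], buf[1], buf[2], buf[3]);
    }

    MPI_Finalize();
    return 0;
}
```

Run with `mpirun -np 2`; a program written this way needs no changes regardless of which MPI version's buffer-access rule applies.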
Thanks.
alberto
. Could you please confirm
that OMPI can handle the LOAD case? And if it cannot handle it, what
could the consequence be? What could happen in the worst case when
there is a data race while reading data?
thanks
alberto
Il 02/08/2010 9.32, Alberto Canestrelli ha scritto:
I believe it
safe?
thank you very much
Alberto
Il 02/08/2010 11.29, Alberto Canestrelli ha scritto:
In the posted irecv case if you are reading from the posted receive
buffer the problem is you may get one of three values:
1. pre irecv value
2. value received from the irecv in progress
3. possibly garbage if
standard-compliant?
It is a pain to double all the variables that I send just because
I read them later on! I would have to change most of my MPI code.
thanks
alberto
Il 18/08/2010 11.56, Alberto Canestrelli ha scritto:
On Mon, 2010-08-02 at 11:36 -0400, Alberto Canestrelli wrote:
> Thanks
I have a simple MPI program that sends data to processor rank 0. The
communication works well, but when I run the program on more than 2
processors (-np 4) the extra receivers waiting for data run at >90%
CPU load. I understand MPI_Recv() is a blocking operation, but why
does it consume so much CPU?
Thanks Torje. I wonder what the benefit is of looping on the incoming
message-queue socket rather than using blocking system I/O calls like
read() or select().
On Apr 23, 2008, at 12:10 PM, Torje Henriksen wrote:
Hi Alberto,
The blocked processes are in fact spin-waiting. While they don't
their CPU utilization. Going to sleep in a
blocking system call will definitely negatively impact latency.
We have plans for implementing the "spin for a while and then block"
technique (as has been used in other MPIs and middleware layers), but
it hasn't been a high priority.
I would like to bind one of my MPI processes to a single core on my
iMac Intel Core Duo system. I'm using release 1.2.4 on Darwin 8.11.1.
It looks like processor affinity is not supported for this kind of
configuration:
$ ompi_info|grep affinity
MCA maffinity: first_use (MCA v1.
Note: I'm running Tiger (Darwin 8.11.1). Things might have changed
with Leopard.
On Apr 23, 2008, at 5:30 PM, Jeff Squyres wrote:
On Apr 23, 2008, at 3:01 PM, Alberto Giannetti wrote:
I would like to bind one of my MPI processes to a single core on my
iMac Intel Core Duo system. I
purpose. Why
don't you use blocking and/or signals instead of
that? I think the priority of this task is very high, because polling
just wastes system resources. On the other hand,
what Alberto claims does not seem reasonable to me.
Alberto,
- Are you oversubscribing one node, which means that you
On Apr 24, 2008, at 6:56 AM, Ingo Josopait wrote:
I am using one of the nodes as a desktop computer. Therefore it is
most
important for me that the mpi program is not so greedily acquiring cpu
time.
From a performance/usability standpoint, you could set interactive
applications to a higher priority.
I am looking to use MPI in a publisher/subscriber context. Haven't
found much relevant information online.
Basically I would need to deal with dynamic tag subscriptions from
independent components (connectors) and a number of other issues. I
can provide more details if there is an interest. A
On Apr 24, 2008, at 9:09 AM, Adrian Knoth wrote:
On Thu, Apr 24, 2008 at 08:25:44AM -0400, Alberto Giannetti wrote:
I am using one of the nodes as a desktop computer. Therefore it is
most important for me that the mpi program is not so greedily
acquiring cpu time.
From a performance
Opened port 0.1.0:2000
Waiting for connections on 0.1.0:2000...
Opened port 0.1.1:2001
Waiting for connections on 0.1.1:2001...
Then the client:
mpirun -np 1 app1 0.1.0:2000
Processor 0 (7933, Sender) initialized
Processor 0 connecting to '0.1.0:2000'
[alberto-giannettis-computer.local:07933] [
applications
with the same mpirun, with MPMD syntax. However, this will have the
adverse effect of a larger-than-expected MPI_COMM_WORLD.
Aurelien
On Apr 26, 08, at 00:31, Alberto Giannetti wrote:
I want to connect two MPI programs through the MPI_Comm_connect/
MPI_Comm_accept API.
This is
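Running the two programs under separate mpiruns avoids the enlarged MPI_COMM_WORLD mentioned above: one side opens a port and accepts, the other connects with the printed port string. A hedged sketch of both roles in one binary (program name, arguments, and the exchanged value are made up for illustration):

```c
/* Name-free connect/accept handshake.
 * Server:  mpirun -np 1 ./rendezvous server
 * Client:  mpirun -np 1 ./rendezvous client "<port string>"      */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;
    int value = 0;

    MPI_Init(&argc, &argv);

    if (argc > 1 && strcmp(argv[1], "server") == 0) {
        MPI_Open_port(MPI_INFO_NULL, port);
        printf("port: %s\n", port);   /* hand this string to the client */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Recv(&value, 1, MPI_INT, 0, 0, inter, MPI_STATUS_IGNORE);
        printf("server got %d\n", value);
        MPI_Comm_disconnect(&inter);
        MPI_Close_port(port);
    } else if (argc > 2 && strcmp(argv[1], "client") == 0) {
        value = 42;
        MPI_Comm_connect(argv[2], MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
        MPI_Send(&value, 1, MPI_INT, 0, 0, inter);
        MPI_Comm_disconnect(&inter);
    }

    MPI_Finalize();
    return 0;
}
```

The port string Open MPI prints encodes contact information, so it must be passed to the client verbatim (quote it on the shell command line).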
Check the FAQ section for processor affinity.
On Apr 25, 2008, at 2:27 PM, Roopesh Ojha wrote:
Hello
As a newcomer to the world of Open MPI who has perused the FAQ and
searched
the archives, I have a few questions about how to schedule processes
across
a heterogeneous cluster where some process
I am having an error using MPI_Lookup_name. The same program works fine
when using MPICH:
/usr/local/bin/mpiexec -np 2 ./client myfriend
Processor 0 (662, Sender) initialized
Processor 0 looking for service myfriend-0
Processor 1 (664, Sender) initialized
Processor 1 looking for service myfriend-
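For what it's worth, MPI_Lookup_name can only succeed if both jobs share a naming service; under Open MPI that historically meant running under the same mpirun (or, in later releases, pointing both jobs at a common ompi-server). A sketch of the server side that a client's MPI_Lookup_name("myfriend-0", ...) would expect, with the service name chosen to match the output above and error handling omitted for brevity:

```c
/* Publish a port under a well-known service name, then accept.
 * A client resolves the same name with MPI_Lookup_name and calls
 * MPI_Comm_connect on the returned port string. */
#include <mpi.h>

int main(int argc, char **argv)
{
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm inter;

    MPI_Init(&argc, &argv);
    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("myfriend-0", MPI_INFO_NULL, port);
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);

    /* ... exchange data over the inter-communicator ... */

    MPI_Unpublish_name("myfriend-0", MPI_INFO_NULL, port);
    MPI_Comm_disconnect(&inter);
    MPI_Close_port(port);
    MPI_Finalize();
    return 0;
}
```

If the two sides are started by unrelated mpiruns with no shared name server, the lookup fails even though the identical code works under MPICH's different runtime.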
Linwei, are you running the command as root?
Try using sudo:
# sudo make install
It will ask you for an administrator password.
On Apr 29, 2008, at 3:54 PM, Linwei Wang wrote:
Dear all,
I'm new to MPI... I'm trying to install Open MPI on my Mac (Leopard).
But during the installation (with the
In message http://www.open-mpi.org/community/lists/users/2007/03/2889.php
I found this comment:
"The only way to get any
benefit from the MPI_Bsend is to have a progress thread which take
care of the pending communications in the background. Such thread is
not enabled by default in Open MPI."
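In practice that means a buffered send only makes progress when the application re-enters the library; the copy out of the attach buffer can be forced to complete with MPI_Buffer_detach, which blocks until all buffered messages are delivered. A minimal sketch of the attach/Bsend/detach sequence (counts and the message value are illustrative):

```c
/* Buffered send: MPI_Bsend returns as soon as the message is copied
 * into the attached buffer; MPI_Buffer_detach drains it. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, msg = 42, bufsize;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Pack_size(1, MPI_INT, MPI_COMM_WORLD, &bufsize);
    bufsize += MPI_BSEND_OVERHEAD;
    buf = malloc(bufsize);
    MPI_Buffer_attach(buf, bufsize);

    if (rank == 0)
        MPI_Bsend(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    MPI_Buffer_detach(&buf, &bufsize); /* blocks until data is sent */
    free(buf);
    MPI_Finalize();
    return 0;
}
```

Without a progress thread, the time between MPI_Bsend and delivery depends entirely on when the sender next calls into MPI, which is exactly the caveat quoted above.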
that.
>
> This is cleaned up in the upcoming 1.3 release and should work much
> smoother.
>
> Ralph
>
>
>
> On 4/27/08 6:58 PM, "Alberto Giannetti"
> wrote:
>
> > I am having an error using MPI_Lookup_name. The same program works fine
> > when usi
Is MPE part of OMPI? I can't find any reference in the FAQ.
I need to log application-level messages on disk to trace my program
activity. For better performance, one solution is to dedicate one
processor to the actual I/O logging, while the other working
processors would trace their activity through non-blocking, string
message sends:
/* LOGGER
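The code fragment above is cut off, so here is a hedged reconstruction of the idea rather than the actual program: rank 0 loops on MPI_Recv as the logger while the workers push lines with nonblocking sends. Tags, the message format, and the shutdown protocol are invented for the sketch:

```c
/* Dedicated-logger pattern: rank 0 consumes log lines, other ranks
 * send them without blocking on the logger's disk I/O. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define LOG_TAG  7
#define DONE_TAG 8
#define LOG_MAX  256

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                       /* logger */
        char line[LOG_MAX];
        int done = 0;
        MPI_Status st;
        while (done < size - 1) {
            MPI_Recv(line, LOG_MAX, MPI_CHAR, MPI_ANY_SOURCE,
                     MPI_ANY_TAG, MPI_COMM_WORLD, &st);
            if (st.MPI_TAG == DONE_TAG)
                done++;                    /* one worker finished */
            else                           /* stderr stands in for the log file */
                fprintf(stderr, "[rank %d] %s\n", st.MPI_SOURCE, line);
        }
    } else {                               /* worker */
        char line[LOG_MAX];
        MPI_Request req;
        snprintf(line, LOG_MAX, "step finished");
        MPI_Isend(line, (int)strlen(line) + 1, MPI_CHAR, 0, LOG_TAG,
                  MPI_COMM_WORLD, &req);
        /* line must stay untouched until the send completes */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        MPI_Send(line, 1, MPI_CHAR, 0, DONE_TAG, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

Note the interaction with the send-buffer discussion earlier in this digest: each worker must keep its line buffer intact until the nonblocking send completes.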
On May 7, 2008, at 1:32 PM, Barry Rountree wrote:
On Wed, May 07, 2008 at 12:33:59PM -0400, Alberto Giannetti wrote:
I need to log application-level messages on disk to trace my program
activity. For better performance, one solution is to dedicate one
processor to the actual I/O logging
On May 7, 2008, at 5:45 PM, Barry Rountree wrote:
On Wed, May 07, 2008 at 01:51:03PM -0400, Alberto Giannetti wrote:
On May 7, 2008, at 1:32 PM, Barry Rountree wrote:
On Wed, May 07, 2008 at 12:33:59PM -0400, Alberto Giannetti wrote:
I need to log application-level messages on disk to
On May 9, 2008, at 8:03 AM, Jeff Squyres wrote:
On May 9, 2008, at 12:58 AM, Mukesh K Srivastava wrote:
What are the tentative release dates for Open MPI v1.3.1?
Please don't CC both mailing lists on future replies to this thread;
one or the other would be fine; thanks!
Brad Benton and Geo
e PC so that it calls the
cross-compiler specified with 'CC' as well as all MPI flags, generating a
statically linked program that would run on the ARM embedded processors.
Thank you in advance,
Alberto
that Intel SandyBridge-based CPUs have any
particularity with respect to virtual memory handling that
causes PARDISO MKL to SIGSEGV once MPI_INIT has been called.
Could you please help me find the root cause of this issue?
Thanks in advance.
Best regards,
Alberto.
--
Alberto F. Martín-Huertas
Centre Int
WCHAN=hrtime, and it looks like it is running, but it really doesn't work.
Do you know anything about this problem? We have another program that has the
same problem…
We launch our program with Slurm: srun --mpi=pmix
Angelines Alberto
Angelines Alberto Morillas
IT Architecture Unit
Office: 22.1.32
Tel.: +34 91 346 6119
Fax: +34 91 346 6537
skype: angelines.alberto
CIEMAT
Avenida Complutense, 40
28040 MADRID