ent project than Open MPI, but you can certainly ask questions on
> their mailing lists, too.
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
>
> From: victor sv
> Sent: Tuesday, May 17, 2022 4:00 AM
> To: Jeff Squyres (jsquyres)
onents. We don't have formal
> documentation of any of them, sorry!
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
>
> From: victor sv
> Sent: Monday, May 16, 2022 1:17 PM
> To: Jeff Squyres (jsquyres)
> Cc: users@lists.open-mpi.org
> Subject: Re: [OMPI users] Netwo
d therefore no commonality is needed (or desired).
>
> Which network and Open MPI transport are you looking to sniff?
>
> --
> Jeff Squyres
> jsquy...@cisco.com
>
> ____
> From: users on behalf of victor sv via
> users
> Sent: Sund
Hello,
I would like to sniff the OMPI network traffic from outside the MPI
application.
I have been going through the OpenMPI code and documentation, but I have not
found any central place that explains MPI communications from the network point
of view.
Please, is there any official documentation, or paper, where this is explained?
Hope you can help!
BR,
Víctor
2017-07-14 1:34 GMT+02:00 Gregory M. Kurtzer :
> Hi Victor,
>
> The area of ABI compatibility I am referring to is with the container's
> underlying library stack. Meaning that if you link in the libraries
> compiled on the host, a
-memchecker --with-valgrind
--with-mpi-param-check
Best,
Victor.
em.
With best regards,
Victor.
P.s. The output of "ompi_info -- all" is also attached.
==4440== Memcheck, a memory error detector
==4440== Copyright (C) 2002-2012, and GNU GPL'd, by Julian Seward et al.
==4440== Using Valgrind-3.8.1 and LibVEX; rerun with -h for copyright info
==
distribute dynamically linked
binaries? For instance, we can distribute (sell) our serial binaries compiled
by GCC. But we are wondering now about OpenMPI.
Thank you in advance!
With best regards,
Victor.
Hi,
I would just like to confirm that the issue has been fixed. Specifically, with the
latest OpenMPI v1.8.1a1r31402 we now need 2.5 hrs to complete verification, and
that timing is even slightly better compared to v1.6.5 (3 hrs).
Thank you very much for your assistance!
With best regards,
Victor.
1.8 because with 1.8 our verification suite takes 2x longer (6.2 hrs)
to complete compared to 1.6.5 (3 hrs).
With best regards,
Victor.
ompi_info.linux.tar
Description: ompi_info.linux.tar
ompi_info.mac.tar
Description: ompi_info.mac.tar
question is: how many nodes were in your allocation?
2 processes on a single machine running under Ubuntu Linux (laptop), or Mac OS
X (Mac mini).
With best regards,
Victor.
of our program we usually spawn parallel binaries thousands of times.
Thank you in advance!
Best regards,
Victor.
does it work?
>
>
> On Mar 12, 2014, at 4:07 AM, Victor wrote:
>
> > Hostname: no, I use lower case, but for some reason while I was
> writing the email I thought that upper case is clearer...
> >
> > The same version of Ubuntu (12.04 x64) is on all nodes and open
3.2014 at 07:37, Victor wrote:
>
> > I am using openmpi 1.7.4 on Ubuntu 12.04 x64 and I have a very odd
> problem.
> >
> > I have 4 nodes, all of which are defined in the hostfile and in
> /etc/hosts.
> >
> > I can log into each node using ssh and certificate
I "fixed it" by finding the message regarding tree spawn in a thread from
November 2013. When I run the job with -mca plm_rsh_no_tree_spawn 1 the job
works over 4 nodes.
I cannot identify any errors in ssh key setup and since I am only using 4
nodes I am not concerned about somewhat slower launch
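As a concrete sketch, the command line I mean looks something like this (the
hostfile name, process count and executable are placeholders, not my exact ones):
mpirun -mca plm_rsh_no_tree_spawn 1 -np 16 --hostfile hosts ./my_app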
I am using openmpi 1.7.4 on Ubuntu 12.04 x64 and I have a very odd problem.
I have 4 nodes, all of which are defined in the hostfile and in /etc/hosts.
I can log into each node using ssh and the certificate method from the shell
that is running the MPI job, by using their names as defined in /etc/hosts
Thanks for your reply. There are some updates, but it was too late last
night to post them.
I now have the AMD/Intel heterogeneous cluster up and running. The initial
problem was that when I installed OpenMPI on the AMD nodes, the library
paths were set to a different location than on the Intel node
eous cluster, and that all I need to do is ensure that all OpenMPI
modules are correctly installed on all nodes.
I need the extra 32 GB of RAM that the AMD nodes bring, as I need to validate
our CFD application, and our additional Intel nodes are still not here (ETA
2 weeks).
Thank you,
Victor
2014 20:43, Tim Prince wrote:
>
> On 1/29/2014 11:30 PM, Ralph Castain wrote:
>
>
> On Jan 29, 2014, at 7:56 PM, Victor wrote:
>
> Thanks for the insights Tim. I was aware that the CPUs will choke beyond
> a certain point. From memory on my machine this happens with 5 co
I use htop and top, and until now I did not make the connection that each
listed process is actually a thread...
Thus the application that I am running is single-threaded. How does that
affect the CPU affinity and rank settings? <-- as mentioned earlier I am a
novice, and very easily confused :-)
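(If it helps to be concrete, the sort of command line I have in mind is roughly
the following, with the process count and executable name as placeholders:
mpirun -np 12 --map-by core --bind-to core ./solver
i.e. one single-threaded rank bound to each core.)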
Thank you for the very detailed reply Ralph. I will try what you say. I
will need to ask the developers to let me know about threading of the main
solver process.
On 30 January 2014 12:30, Ralph Castain wrote:
>
> On Jan 29, 2014, at 7:56 PM, Victor wrote:
>
> Thanks for the ins
machines with 8 GB each on loan and will attempt to make them work alongside the
existing Intel nodes.
Victor
On 29 January 2014 22:03, Tim Prince wrote:
>
> On 1/29/2014 8:02 AM, Reuti wrote:
>
>> Quoting Victor :
>>
>> Thanks for the reply Reuti,
>>>
>&
Sorry, typo. I have dual X5660, not X5560.
http://ark.intel.com/products/47921/Intel-Xeon-Processor-X5660-12M-Cache-2_80-GHz-6_40-GTs-Intel-QPI?q=x5660
On 29 January 2014 21:02, Reuti wrote:
> Quoting Victor :
>
> Thanks for the reply Reuti,
>>
>> There are two machines: N
installed Open-MX and recompiled
OpenMPI to use it. This has resulted in approximately 10% better
performance using the existing GbE hardware.
On 29 January 2014 19:40, Reuti wrote:
> On 29.01.2014 at 03:00, Victor wrote:
>
> > I am running a CFD simulation benchmark cavity3d available w
if I run an asymmetric number of MPI jobs on each node.
For instance running -np 12 on Node1 is significantly faster than running
-np 16 across Node1 and Node2, thus adding Node2 actually slows down the
performance.
Thanks,
Victor
und any options through ompi_info or via google... Any help will be
greatly appreciated.
Sincerely,
Victor.
Hi Ralph,
> -mca orte_abort_non_zero_exit 0
Thank you for the hint. That is exactly what I need! BTW, does it help if
one of the working nodes occasionally dies during the MPMD run?
With best regards,
Victor.
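P.S. For reference, the way I plan to use it is roughly like this, with the
application names and process counts as placeholders for our real MPMD setup:
mpirun -mca orte_abort_non_zero_exit 0 -np 4 ./app_a : -np 4 ./app_b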
Is there any way to
force mpiexec/mpirun not to clean up all processes on error, and instead wait until all
spawned processes either successfully complete or abnormally terminate their
execution?
Thank you in advance!
Victor.
Dear Brian,
thank you very much for your assistance and for the bug fix.
Regards,
Victor.
Since my question has gone unanswered for 4 days, I am repeating the original post.
Dear Developers,
I am running into memory problems when creating/allocating MPI's window and its
memory frequently. Below is listed a sample code reproducing the problem:
#include
#include
#define NEL 8
#define NTIMES 10
Dear Developers,
I am running into memory problems when creating/allocating MPI's window and its
memory frequently. Below is listed a sample code reproducing the problem:
#include
#include
#define NEL 8
#define NTIMES 100
int main(int argc, char *argv[]) {
int i;
double w[
highly appreciated!
With best regards,
Victor.
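P.S. Since the listing above was cut off by the archive, here is a minimal,
self-contained sketch of the kind of loop I mean (the buffer handling is
simplified; only the repeated create/free pattern matters):

#include <stdio.h>
#include <mpi.h>

#define NEL 8
#define NTIMES 100

int main(int argc, char *argv[]) {
    int i;
    double w[NEL];
    MPI_Win win;

    MPI_Init(&argc, &argv);

    /* Repeatedly expose the same local buffer as an RMA window and free
       it again; the reported problem is memory growth over iterations. */
    for (i = 0; i < NTIMES; i++) {
        MPI_Win_create(w, NEL * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);
        MPI_Win_free(&win);
    }

    MPI_Finalize();
    return 0;
}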
The test program is listed below:
/* Simple test for MPI_Accumulate && derived datatypes */
#include
#include
#include
#include
#define NEL 10
#define NAC 2
int main(int argc, char **argv) {
int i, j, rank, nranks;
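The listing above is also truncated, so here is a minimal, runnable sketch in the
same spirit (the strided vector layout and the neighbour targeting are
illustrative assumptions, not the exact datatype from the original test):

#include <stdio.h>
#include <mpi.h>

#define NEL 10
#define NAC 2

int main(int argc, char **argv) {
    int i, rank, nranks;
    double src[NEL], dst[NEL];
    MPI_Datatype vtype;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    for (i = 0; i < NEL; i++) { src[i] = rank; dst[i] = 0.0; }

    /* Derived datatype: NAC single doubles, strided NEL/NAC elements apart. */
    MPI_Type_vector(NAC, 1, NEL / NAC, MPI_DOUBLE, &vtype);
    MPI_Type_commit(&vtype);

    MPI_Win_create(dst, NEL * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);
    /* Accumulate the strided elements into the next rank's window. */
    MPI_Accumulate(src, 1, vtype, (rank + 1) % nranks,
                   0, 1, vtype, MPI_SUM, win);
    MPI_Win_fence(0, win);

    MPI_Win_free(&win);
    MPI_Type_free(&vtype);
    MPI_Finalize();
    return 0;
}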
Hi,
I cannot call MPI::Datatype::Commit() and MPI::Datatype::Get_size()
functions from my program. The error that I receive is the same for both of
them:
"cannot call member function 'virtual void MPI::Datatype::Commit()' without
an object
or
"cannot call member function 'virtual void MPI::Dataty
are highly appreciated
Thank you,
Victor Pomponiu
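For reference, Commit() and Get_size() are instance methods in the MPI C++
bindings, so they have to be called on a concrete datatype object rather than on
the MPI::Datatype class itself. A minimal sketch (the vector layout below is only
an illustration, not the datatype from the original code):

#include <mpi.h>
#include <iostream>

int main(int argc, char **argv) {
    MPI::Init(argc, argv);

    // Build a derived datatype, then commit *that object*.
    MPI::Datatype vec = MPI::DOUBLE.Create_vector(4, 1, 2);
    vec.Commit();                    // OK: called on an object
    // MPI::Datatype::Commit();      // error: no object to call it on

    std::cout << "type size: " << vec.Get_size() << " bytes" << std::endl;

    vec.Free();
    MPI::Finalize();
    return 0;
}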
-
/**
* VecData.h: Interface class for data appearing in vector format.
*/
# include "DistData.h" //Inte
2 of 6 running on el-torito
...
etc.
Any ideas as to why I keep getting wrong numbers for rank and number of
processes?
Greetings from Monterrey, Mexico
--
Victor M. Rosas García
. So
basically, the custom program-prefix did not work for some files.
OpenMPI version 1.2.4.
I can provide more information if needed.
Sincerely,
Victor.
benchmark would give
between a Pentium D and the last generation Sparc
processor.
Victor
--- Don Kerr wrote:
> Victor,
>
> Obviously there are many variables involved with
> getting the best
> performance out of a machine and understanding the 2
> environments you
>
Hi Don,
But as I see it, you must pay for these debuggers.
Victor
--- Don Kerr wrote:
> Victor,
>
> You are right Prism will not work with Open MPI
> which Sun's ClusterTools
> 7 is based on. But Prism was not available for CT 6
> either. Tot
Pentium D at 2.8GHz. I could
expect the Pentium to be 4 times faster, but not 16
times.
I wonder how a Sparc IV would perform.
Victor
--- Don Kerr wrote:
> Additionally, Solaris comes with the IB drivers and
> since the libs are
> there OMPI think
.
Victor
--- Jeff Pummill wrote:
> Victor,
>
> Build the FT benchmark and build it as a class B
> problem. This will run
> in the 1-2 minute range instead of 2-4 seconds the
> CG class A benchmark
> does.
>
>
> Jeff F. Pummill
> S
remark that I am faster on one process
compared to your processor.
Victor
--- Jeff Pummill wrote:
> Perfect! Thanks Jeff!
>
> The NAS Parallel Benchmark on a dual core AMD
> machine now returns this...
> [jpummil@localhost bin]$ mpirun -np
nchmarking program first to see
what happens. If you have a benchmarking program I
would gladly test it.
What is the best way to debug OpenMPI programs?
Until now I ran prism which is part of the
SunClusterTools.
Victor
--- Jeff Pummill wrote:
> Vict
The problem is that my executable file runs on the
Pentium D in 80 seconds on two cores and in 25 seconds
on one core.
And on another Sun SMP machine with 20 processors it
runs perfectly (the problem is perfectly scalable).
Victor Marian
Laboratory of Machine Elements and
Hello,
I have a Pentium D computer with Solaris 10 installed.
I installed OpenMPI, successfully compiled my Fortran
program, but when giving
mpirun -np 2 progexe
I receive
[0,1,0]: uDAPL on host SERVSOLARIS was unable to find
any NICs.
Another transport will be used instead, although this
may resu
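(One common way to avoid the uDAPL probe altogether, if no InfiniBand/uDAPL
hardware is actually present, is to exclude that transport explicitly, e.g.
something like:
mpirun --mca btl ^udapl -np 2 progexe
but whether that is the right fix here depends on the actual hardware.)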
> From what you sent, it appears that Open MPI thinks your processes called
> MPI_Abort (as opposed to segfaulting or some other failure mode). The system
> appears to be operating exactly as it should - it just thinks your program
> aborted the job - i.e., that one or more processes actually called
t http://www.capca.ucalgary.ca)
They have some MPI libraries (LAM, I believe) installed, but since they
don't support
Fortran 90, I compile my own library. I install it in my home directory
/home/victor/programs. I configure with the following options
F77=ifort FFLAGS='-O2' FC=ifort CC=distcc ./co