Dear MPI people,
I am working on a graph partitioning problem,
where we have an undirected graph of p MPI processes. The edges have weights
that show how much the processes communicate among themselves. The cluster has
multiple nodes (each node with 8 cores) and
Dear people,
Let's say there are N MPI processes. Each MPI process has
to communicate with some T processes, where T < N. This information forms a
directed graph (and every process knows only about its own edges). I need to
convert it to an undirected graph, so that each process will
,
Mudassar
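Building the undirected graph from per-process out-edges requires each rank to learn who sends to it; with MPI that is typically one collective exchange (e.g. an MPI_Alltoall of neighbour lists). Below is a minimal sketch of just the symmetrization step, with the exchange simulated locally so the logic is self-contained — the names (symmetrize, out_edges) are illustrative, not from the thread:

```python
def symmetrize(out_edges):
    """out_edges: {rank: set of ranks it sends to} -> undirected adjacency."""
    undirected = {r: set(nbrs) for r, nbrs in out_edges.items()}
    for src, nbrs in out_edges.items():
        for dst in nbrs:
            # add the reverse edge, so dst also sees src as a neighbour
            undirected.setdefault(dst, set()).add(src)
    return undirected

directed = {0: {1}, 1: {2}, 2: set()}
print(symmetrize(directed))  # rank 1 is now a neighbour of both 0 and 2
```

In a real MPI program each rank would contribute only its own row and learn its in-edges from the exchange; the union step stays the same.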
From: Jeff Squyres
To: Open MPI Users
Cc: Mudassar Majeed
Sent: Friday, June 1, 2012 4:52 PM
Subject: Re: [OMPI users] Intra-node communication
...and exactly how you measured. You might want to run a well-known benchmark,
like NetPIPE or the
Maybe it is not installed on our supercomputing center. What do you suggest?
best regards,
- Forwarded Message -
From: Mudassar Majeed
To: Jeff Squyres
Sent: Friday, June 1, 2012 5:03 PM
Subject: Re: [OMPI users] Intra-node communication
Here is the code, I am
Dear MPI people,
Can someone tell me why MPI_Ssend takes more
time when two MPI processes are on the same node? The same two processes
on different nodes take much less time for the same message exchange. I am
using a supercomputing center and this happens.
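One way to make such a comparison trustworthy is to warm up first, repeat many times, and report the minimum round-trip time — which is effectively what benchmarks like NetPIPE do. A language-neutral sketch of that measurement discipline (the exchange callable is a stand-in for the real MPI_Ssend/MPI_Recv round trip, not MPI code):

```python
import time

def time_roundtrip(exchange, iters=1000, warmup=100):
    """Return the best observed time for one call to exchange().

    exchange stands in for one real round trip (e.g. MPI_Ssend followed
    by MPI_Recv); the warmup iterations let caches and the transport
    settle before anything is recorded.
    """
    for _ in range(warmup):
        exchange()
    best = float("inf")
    for _ in range(iters):
        t0 = time.perf_counter()
        exchange()
        best = min(best, time.perf_counter() - t0)
    return best  # the minimum filters out OS jitter

print(time_roundtrip(lambda: None, iters=100, warmup=10))
```

Timing a single exchange, as ad-hoc tests often do, can easily make the intra-node path look slower than it is.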
assar Majeed
Cc: Open MPI Users
Sent: Tuesday, May 22, 2012 1:58 PM
Subject: Re: [OMPI users] Need MPI algorithms, please help
On May 22, 2012, at 2:35 AM, Mudassar Majeed wrote:
> The algorithms can be like implementation of sorting
>algorithms using OpenMPI. Secondly, c
need to name at least one so
that I can pursue.
best regards,
From: Jeff Squyres
To: Mudassar Majeed ; Open MPI Users
Sent: Tuesday, May 22, 2012 1:45 AM
Subject: Re: [OMPI users] Need MPI algorithms, please help
You haven't really stated what ki
Dear MPI people,
I need a set of algorithms for
calculating the same thing using different distributed (MPI) algorithms. The
algorithms may need different data distributions, and their execution times
are sensitive to the problem size, number of processe
Dear people,
I am using MPI at a supercomputing center. I don't have
access to reinstall Open MPI with valgrind support enabled. I need to check for
memory leaks in my application. How can I see in which line of my MPI
application there is a memory leak? Supercomput
No, I am using MPI_Ssend and MPI_Recv everywhere.
regards,
Mudassar
From: Jeff Squyres
To: Mudassar Majeed ; Open MPI Users
Cc: "anas.alt...@gmail.com"
Sent: Monday, November 28, 2011 3:05 PM
Subject: Re: [OMPI users] Deadlock at MPI_FInaliz
Dear people,
In my MPI application, all the processes call
MPI_Finalize (all processes reach it), but the rank 0 process does not
return from MPI_Finalize and the application remains running. Please suggest
what the cause of this could be.
regards,
Mudassar
I am still looking for it :(
thanks and regards,
Mudassar Majeed
PhD Student
Linkoping University
PhD Topic: Parallel Computing (Optimal composition of parallel programs and
runtime support).
From: Jeff Squyres
To: mudassar...@yahoo.com; Open MPI Users
What if two processes Pi and Pj send a message to each other at the same time?
Will both block in your suggested code?
If not, then I can go for that. BTW, I have tried that before.
regards,
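The classic way out of that mutual-blocking scenario is to break the symmetry: impose a total order on each pair (e.g. the lower rank sends first), or let MPI_Sendrecv pair the two operations internally. A toy sketch of the ordering rule (the function name is illustrative):

```python
def exchange_order(my_rank, peer_rank):
    """Operation order for a pairwise exchange that avoids the deadlock
    where both ranks sit in a blocking/synchronous send.

    The lower rank sends first; the higher rank receives first.
    (MPI_Sendrecv achieves the same effect without manual ordering.)
    """
    if my_rank < peer_rank:
        return ["send", "recv"]
    return ["recv", "send"]

# Pi=3 and Pj=7 exchanging messages at the same time:
print(exchange_order(3, 7))  # ['send', 'recv']
print(exchange_order(7, 3))  # ['recv', 'send']
```

Because exactly one side of every pair sends first, no cycle of waiting sends can form.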
From: Lukas Razik
To: Mudassar Majeed ; "us...@open-mp
Dear people,
I have a scenario as shown below, please tell me if it
is possible or not
--
while(!IsDone)
{
// some code here
MPI_Irecv( .. );
// some code here
MPI_Iprobe( ., &is_there_a_m
I know about these functions; they have special requirements, like the MPI_Irecv call
should be made in every process. My processes should not poll for messages or
receive them explicitly. But messages should go into their message queues and be
retrieved when needed, just like UDP communication.
Regards
Dear all,
I want to use MPI_Send just like UDP messaging. Let's say I have
100 MPI processes such that any MPI process can send a message to any other MPI
process; the messages get added to the queue, and when that process performs
the receive operation it simply gets the message
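MPI's unexpected-message queue already gives roughly these semantics: for small messages MPI_Send typically completes eagerly, the data waits inside the library, and the receiver pulls it with MPI_Probe/MPI_Recv (possibly with MPI_ANY_SOURCE) whenever it chooses. The queueing idea can be sketched in plain Python — Mailbox and its methods are illustrative, not an MPI API:

```python
from collections import deque

class Mailbox:
    """UDP-style messaging sketch: senders deposit into a per-destination
    queue; the receiver pulls whenever it likes. In MPI the library does
    this queueing itself for messages that arrive before a receive is
    posted."""

    def __init__(self, n_procs):
        self.queues = [deque() for _ in range(n_procs)]

    def send(self, dest, msg):
        # fire-and-forget, like an eager MPI_Send of a small message
        self.queues[dest].append(msg)

    def recv(self, me):
        # non-blocking pull; returns None if nothing is queued
        return self.queues[me].popleft() if self.queues[me] else None

box = Mailbox(100)
box.send(42, "hello")
print(box.recv(42))  # hello
print(box.recv(42))  # None
```

The difference from UDP is that the receiver must still call some form of receive or probe; MPI will not deliver into user memory on its own.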
tion
To: Open MPI Users
Message-ID: <811ffdfc-c3b6-4bf7-9e53-95c0b572f...@cisco.com>
Content-Type: text/plain; charset=us-ascii
On Nov 10, 2011, at 11:30 AM, Mudassar Majeed wrote:
> For example there are 10 nodes, and each node contains 20 cores. We will have
> 200 cores in total and let
be achieved (to
achieve balance in load and communications). I need your suggestions in these
regards,
thanks and best regards,
Mudassar
From: Josh Hursey
To: Open MPI Users
Cc: Mudassar Majeed
Sent: Thursday, November 10, 2011 5:11 PM
Subject: Re: [OMPI
istic. That's why I want to see if
it is possible to migrate a process from one core to another or not. Then I
will see how good my heuristic will be.
thanks
Mudassar
From: Jeff Squyres
To: Mudassar Majeed ; Open MPI Users
Cc: Ralph Castain
Sent: Thursday
processes per core.
I can explain the complete problem to you if you want.
regards,
Mudassar
From: Ralph Castain
To: Mudassar Majeed ; Open MPI Users
Sent: Thursday, November 10, 2011 1:57 PM
Subject: Re: [OMPI users] Process Migration
I'm not sure what you me
Dear MPI community,
Please inform me if it is possible to
migrate MPI processes among the nodes or cores. By node I mean a machine having
multiple cores. So the cluster can have several nodes and each node can have
several cores. I want to know if it is t
Dear MPI people,
I have a vector class with template as follows,
template <typename T>
class Vector
It is a wrapper on the STL vector class. The element type is T, which will be
replaced by the actual type at instantiation. I have not seen any
support in C++ templ
Thank you, this is a very useful tool for me.
regards,
Mudassar
From: Edgar Gabriel
To: Mudassar Majeed ; Open MPI Users
Sent: Thursday, October 27, 2011 6:20 PM
Subject: Re: [OMPI users] Want to find LogGP parameters. Please help
you can have a look at the Netgauge
Dear MPI people,
I want to use the LogGP model with MPI to find how
much time a message of K bytes will take. For this, I need to find the latency
L, overhead o, and gap G. Can somebody tell me how I can measure these three
parameters of the underlying network? and h
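For reference, the standard LogGP estimate for one k-byte point-to-point message is T(k) = o + L + (k-1)·G + o, i.e. sender overhead, wire latency, per-byte gaps, and receiver overhead. A small sketch — the parameter values below are made up for illustration; tools like Netgauge measure the real ones:

```python
def loggp_time(k_bytes, L, o, G):
    # LogGP time for one k-byte message:
    # sender overhead + latency + per-byte gaps + receiver overhead
    return o + L + (k_bytes - 1) * G + o

# Made-up parameters: 5 us latency, 1 us overhead, 2 ns per byte.
for k in (1, 1024, 1 << 20):
    print(k, loggp_time(k, L=5e-6, o=1e-6, G=2e-9))
```

For large messages the (k-1)·G term dominates, so 1/G is effectively the sustained bandwidth.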
computation whether the
data has arrived or not; then it will operate on that data. Someone please inform
me how I can accomplish this.
regards,
Mudassar Majeed.
From: Terry Dontje
To: Jeff Squyres
Cc: Mudassar Majeed ; Open MPI Users
Sent: Saturday, July 16, 2011 5:25 AM
Subject: Re: [OMPI users] Urgent Question regarding, MPI_ANY_SOURCE.
I have to agree with Jeff, we really need a complete program to really debug
this
>> Neither ...!!
P7 >> I could reach here ...!!
P14 >> I could reach here ...!!
P1 >> Received from P7, packet contains rank: 11
P1 >> I could reach here ...!!
P9 >> I could reach here ...!!
P2 >> Received from P11, packet contains rank: 13
P2 >
nd it sends the message to "rec_rank" that was
displayed before sending the message. But on the receiver side, the MPI_SOURCE
comes out to be wrong.
This shows me that messages on the receiving side are matched on the basis
of MPI_ANY_SOURCE, which seems like it does not see the destinat
would start by either adding debugging
printf's to your code to trace the messages, or narrowing down the
code to a small kernel such that you can prove to yourself that MPI is
working the way it should; and if not, you can show us where it is going
wrong.
--td
On 7/15/2011 6:51 AM, Mudassar
tus.MPI_SOURCE will contain the rank of the actual sender not the
receiver's rank. If you are not seeing that then there is a bug somewhere.
--td
On 7/14/2011 9:54 PM, Mudassar Majeed wrote:
> Friend,
> I cannot specify the rank of the sender, because only
> the sen
gards,
Mudassar
From: Jeff Squyres
To: Mudassar Majeed
Cc: Open MPI Users
Sent: Friday, July 15, 2011 3:30 AM
Subject: Re: [OMPI users] Urgent Question regarding, MPI_ANY_SOURCE.
Right. I thought you were asking about receiving *another* message from
whomeve
wants to
receive only the message that was sent to this receiver. But when it
posts a receive with MPI_ANY_SOURCE and MPI_ANY_TAG, the receiver will
capture any message (even one not targeted at it).
regards,
Mudassar
From: Jeff Squyres
To: Mudassar Majeed
argument as B does not know about the sender. What
should I do in this situation ?
regards,
Mudassar Majeed