How many cores does your processor have?
On Wed, Feb 23, 2011 at 8:52 PM, Li Zuwei wrote:
> Dear Users,
>
> I'm measuring barrier synchronization performance on the v1.5.1 build of
> OpenMPI. I am currently trying to measure synchronization performance on a
> single node, with 5 processes. I'm g
Hello. I'm using VPS hosting under Xen.
Multicast is not available; only one-to-one TCP/UDP works. How can I use
Open MPI to send and receive data from many nodes in this environment?
--
Vasiliy G Tolstov
Selfip.Ru
I'm not sure what you're asking. Open MPI should work just fine in a Xen
environment.
If you're unsure about how to use the MPI API, you might want to take a
tutorial to get you familiar with MPI concepts, etc. Google around; there are a
bunch available. My personal favorite is at the UIUC NCS
You should:
- do N warmup barriers
- start the timers
- do M barriers (M should be a lot)
- stop the timers
- divide the time by M
Benchmarking is tricky to get right.
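Something along those lines, as a minimal C sketch (the N/M values below are
arbitrary placeholders - pick M large enough that timer resolution doesn't
matter):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        const int N = 100;      /* warmup barriers */
        const int M = 100000;   /* timed barriers - should be a lot */
        int i, rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        for (i = 0; i < N; ++i)
            MPI_Barrier(MPI_COMM_WORLD);   /* warmup, not timed */

        double start = MPI_Wtime();
        for (i = 0; i < M; ++i)
            MPI_Barrier(MPI_COMM_WORLD);
        double stop = MPI_Wtime();

        if (rank == 0)
            printf("average barrier time: %g usec\n",
                   (stop - start) / M * 1e6);

        MPI_Finalize();
        return 0;
    }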
Sent from my PDA. No type good.
On Feb 23, 2011, at 11:54 PM, "Li Zuwei" wrote:
> Dear Users,
>
> I'm measuring barrier sy
On Thu, 2011-02-24 at 06:32 -0500, Jeff Squyres (jsquyres) wrote:
> I'm not sure what you're asking. Open MPI should work just fine in a Xen
> environment.
>
> If you're unsure about how to use the MPI API, you might want to take a
> tutorial to get you familiar with MPI concepts, etc. Google a
In that case, I have a small question concerning design:
Suppose task-based parallelism where one node (master) distributes
work/tasks to 2 other nodes (slaves) by means of an MPI_Put. The master
allocates 2 buffers locally in which it will store all necessary data that
is needed by the slave to
Late yesterday I did have a chance to test the patch Jeff provided
(against 1.4.3 - testing 1.5.x is on the docket for today). While it
works, in that I can specify a gid_index, it doesn't do everything
required - my traffic won't match a lossless CoS on the ethernet
switch. Specifying a GID is o
Hi Toon,
Can you use non-blocking send/recv? It sounds like this will give you
the completion semantics you want.
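Something like this, roughly (just a sketch - the buffer names, counts, ranks,
and tag below are placeholders, not your actual code):

    /* master side: hand a task buffer to each slave without blocking */
    MPI_Request reqs[2];
    MPI_Isend(task_buf[0], count, MPI_DOUBLE, slave_rank[0], TAG,
              MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(task_buf[1], count, MPI_DOUBLE, slave_rank[1], TAG,
              MPI_COMM_WORLD, &reqs[1]);
    /* ... overlap other work here ... */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);  /* both buffers reusable now */

    /* slave side: post the receive early, wait only when the data is needed */
    MPI_Request req;
    MPI_Irecv(recv_buf, count, MPI_DOUBLE, MASTER_RANK, TAG,
              MPI_COMM_WORLD, &req);
    /* ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);

The MPI_Waitall / MPI_Wait give you per-operation completion, which is what you
don't get from the one-sided interface.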
Best,
~Jim.
On 2/24/11 6:07 AM, Toon Knapen wrote:
In that case, I have a small question concerning design:
Suppose task-based parallelism where one node (master) distributes
But that is what surprises me. Indeed, the scenario I described can be
implemented using two-sided communication, but it does not seem to be possible
when using one-sided communication.
Additionally, the MPI 2.2 standard describes on page 356 the matching rules
for post and start, complete and wait and
I'm still not sure what you're asking -- are you asking how to get Open MPI to
work if multicast is disabled in your network?
If so, not to worry; Open MPI doesn't currently use multicast.
On Feb 24, 2011, at 6:39 AM, Vasiliy G Tolstov wrote:
> On Thu, 2011-02-24 at 06:32 -0500, Jeff Squyres (
I personally find the entire MPI one-sided chapter to be incredibly confusing
and subject to arbitrary interpretation. I have consistently advised people to
not use it since the late '90s.
That being said, the MPI one-sided chapter is being overhauled in the MPI-3
forum; the standardization pr
On Feb 24, 2011, at 8:00 AM, Michael Shuey wrote:
> Late yesterday I did have a chance to test the patch Jeff provided
> (against 1.4.3 - testing 1.5.x is on the docket for today). While it
> works, in that I can specify a gid_index,
Great! I'll commit that to the trunk and start the process of
I'm afraid I don't see the problem. Let's get 4 nodes from slurm:
$ salloc -N 4
Now let's run env and see which SLURM_ environment variables are set:
$ srun env | egrep ^SLURM_ | head
SLURM_JOB_ID=95523
SLURM_JOB_NUM_NODES=4
SLURM_JOB_NODELIST=svbu-mpi[001-004]
SLURM_JOB_CPUS_PER_NODE=4(x4)
SLURM_JOBID=
Like I said, this isn't an OMPI problem. You have your slurm configured to
pass certain envars to the remote nodes, and Brent doesn't. It truly is just
that simple.
I've seen this before with other slurm installations. Which envars get set
on the backend is configurable, that's all.
Has nothing t
The weird thing is that when running his test, he saw different results with HP
MPI vs. Open MPI.
What his test didn't say was whether those were the same exact nodes or not.
It would be good to repeat my experiment with the same exact nodes (e.g.,
inside one SLURM salloc job, or use the -w pa
If you are trying to use OMPI as the base for ORCM, then you can tell ORCM
to use OMPI's "tcp" multicast module - it fakes multicast using pt-2-pt tcp
messaging.
-mca rmcast tcp
will do the trick.
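If you'd rather not put it on every command line, the same setting can go in
your per-user MCA params file, e.g. (default location for a standard install):

    # $HOME/.openmpi/mca-params.conf
    rmcast = tcp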
On Thu, Feb 24, 2011 at 6:27 AM, Jeff Squyres wrote:
> I'm still not sure what you're asking --
On Thu, Feb 24, 2011 at 8:30 AM, Jeff Squyres wrote:
> The weird thing is that when running his test, he saw different results
> with HP MPI vs. Open MPI.
>
It sounded quite likely that HP MPI is picking up and moving the envars
itself - that possibility was implied, but not clearly stated.
>
I'm running OpenMPI v1.4.3 and slurm v2.2.1. I built both with the default
configuration except setting the prefix. The tests were run on the exact same
nodes (I only have two).
When I run the test you outline below, I am still missing a bunch of env
variables with OpenMPI. I ran the extra t
I would talk to the slurm folks about it - I don't know anything about the
internals of HP-MPI, but I do know the relevant OMPI internals. OMPI doesn't
do anything with respect to the envars. We just use "srun -hostlist <hostlist>"
to launch the daemons. Each daemon subsequently gets a message telling it
wha
Hi, all,
I asked for help for a code problem here days ago (
http://www.open-mpi.org/community/lists/users/2011/02/15656.php ).
Then I found that the code can be executed without any issue on another
cluster. So I suspected that there may be something wrong in my cluster
environment configuration. So
Sorry Ralph, I have to respectfully disagree with you on this one. I believe
that the output below shows that the issue is that the two different MPIs
launch things differently. On one node, I ran:
[brent@node2 mpi]$ which mpirun
~/bin/openmpi143/bin/mpirun
[brent@node2 mpi]$ mpirun -np 4 --
FWIW, I'm running Slurm 2.1.0 -- I haven't updated to 2.2.x yet.
Just to be sure, I re-ran my test with OMPI 1.4.3 (I was using the OMPI
development SVN trunk before) and got the same results:
$ srun env | egrep ^SLURM_ | wc -l
144
$ mpirun -np 4 --bynode env | egrep ^SLURM_ | wc -l
144
--
On Feb 24, 2011, at 11:15 AM, Henderson, Brent wrote:
> Note that the parent of the sleep processes is orted and that orted was
> started by slurmstepd. Unless orted is updating the slurm variables for the
> children (which is doubtful), they will not contain the specific settings
> that I
> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Jeff Squyres
> Sent: Thursday, February 24, 2011 10:20 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] SLURM environment variables at runtime
>
> On Feb 24, 2011, at 11:15 AM, Hen
Just to follow up on Jeff's comments:
I'm a member of the MPI-3 RMA committee and we are working on improving
the current state of the RMA spec. Right now it's not possible to ask
for local completion of specific RMA operations. Part of the current
RMA proposal is an extension that would all
On Feb 24, 2011, at 2:59 PM, Henderson, Brent wrote:
> [snip]
> They really can't all be SLURM_PROCID=0 - that is supposed to be unique for
> the job - right? It appears that the SLURM_PROCID is inherited from the
> orted parent - which makes a fair amount of sense given how things are
> launc
The issues have been identified deep in the tuned collective component. They
were fixed in the trunk and in 1.5 a while back, but never pushed into the 1.4
series. I attached a patch to the ticket, and will force it into the next 1.4 release.
Thanks,
george.
On Feb 14, 2011, at 13:11 , Jeff Squyr
I guess I wasn't clear earlier - I don't know anything about how HP-MPI
works. I was only theorizing that perhaps they did something different that
results in some other slurm vars showing up in Brent's tests. From Brent's
comments, I guess they don't - but they launch jobs in a different manner
th