Re: [OMPI users] RPM build errors when creating multiple rpms

2008-03-19 Thread Michael Jennings
On Tuesday, 18 March 2008, at 18:18:36 (-0700),
Christopher Irving wrote:

> Well, you're half correct.  You're thinking that _prefix is always
> defined as /usr.

No, actually I'm not. :)

> But in the case where install_in_opt is defined, they have redefined
> _prefix to be /opt/%{name}/%{version}, in which case it is fine for
> one of the openmpi rpms to claim that directory with a %dir
> directive.

Except that you should never do that.  First off, RPMs should never
install in /opt by default.  Secondly, the correct way to support
installing in /opt is to list the necessary prefixes in the RPM
headers so that the --prefix option (or the --relocate option) may be
used at install time.  OpenMPI already has hooks (IIRC) for figuring
things out intelligently based on invocation prefix, so it should fit
quite nicely into this model.
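
For illustration, a relocatable package declares its prefix in the
header and lets the installer move it at install time; the snippet
below is a hypothetical fragment, not the actual Open MPI spec file:

    # Hypothetical spec fragment -- not the real openmpi.spec
    Name:    openmpi
    # Marks the package as relocatable under this prefix
    Prefix:  %{_prefix}

    %files
    %{_prefix}/bin/mpirun
    %{_prefix}/lib/libmpi.so.*

The person installing then picks the location, e.g.
"rpm -ivh --prefix /opt/openmpi/1.2.5 openmpi-*.rpm" (version and file
name made up here), instead of the spec hard-coding /opt.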

Obviously RPMs only intended for local use can do anything they want,
but RPMs which install in /opt should never be redistributed.

> However, I think you missed the point.  I'm not suggesting they need
> to add a %{_prefix} statement in the %files section; I'm just pointing
> out that it's not the source of the duplicated files.  In other words,
> %dir %{_prefix} is not the same as %{_prefix} and won't cause all the
> files in _prefix to be included.

That's correct.
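
To make the distinction concrete (a made-up %files fragment, not taken
from the Open MPI spec):

    %files
    # Owns only the directory entry itself; the files underneath it
    # are NOT pulled into the package by this line.
    %dir %{_prefix}

    # A bare directory name, by contrast, recursively packages the
    # directory and everything inside it.
    %{_prefix}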

> It can't be safely ignored when it causes rpm build to fail.

The warning by itself should never cause rpmbuild to fail.  If it
does, the problem lies elsewhere.  Nothing in either the rpm 4.4 or
the rpm 5 code can cause a failure at that point.

> Also, you don't want to use an %exclude because that would prevent
> the specified files from ever getting included, which is not the
> desired result.

If you use %exclude in only one of the locations where the file is
listed (presumably the "less correct" one), it will solve the problem.
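
For example (hypothetical subpackage layout, not the real spec): if
two subpackages both match the same file, excluding it from one of
them keeps it packaged exactly once:

    %files runtime
    %{_prefix}/bin/mpirun

    %files devel
    %{_prefix}/bin/*
    # mpirun is already owned by the runtime subpackage; exclude it
    # here so it is not listed twice.
    %exclude %{_prefix}/bin/mpirun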

Michael

-- 
Michael Jennings 
Linux Systems and Cluster Admin
UNIX and Cluster Computing Group


Re: [OMPI users] parallel molecular Dynamic simulations: All to All Communication

2008-03-19 Thread Jeff Squyres

On Mar 18, 2008, at 5:52 PM, Chembeti, Ramesh (S&T-Student) wrote:

> My question is: when I printed the results, the accelerations on  
> processor 0 (i.e., from 1 to nmol/2) are the same as the results for  
> the serial code, whereas they aren't the same for processor 1  
> (nmol/2+1 to nmol).  As I am learning MPI, I couldn't find where it  
> went wrong in doing an all-to-all operation for the acceleration  
> part ax(i,m), ay(i,m), az(i,m).



I can't really parse your question, and I unfortunately don't have  
time to parse your code.  I see that you're doing 3 bcasts (they're  
not all-to-all, as your comment claims), but I don't know how big they  
are.
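
To illustrate the difference (a minimal sketch with made-up array
names and sizes -- this is not your code): MPI_Bcast only copies the
root's buffer to every rank, whereas getting every rank's partial
accelerations to every other rank is a gather-to-all operation such
as MPI_Allgather:

    /* Sketch only: hypothetical names and sizes, error checking omitted. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int nmol  = 1024;          /* made-up molecule count        */
        const int chunk = nmol / size;   /* assumes size divides nmol     */
        double *ax = calloc(nmol, sizeof(double));

        /* ... each rank computes ax[rank*chunk .. (rank+1)*chunk-1] ... */

        /* An MPI_Bcast(ax, nmol, MPI_DOUBLE, 0, MPI_COMM_WORLD) here
         * would simply overwrite everyone's ax with rank 0's values --
         * it does not merge contributions from the other ranks.        */

        /* Gather every rank's chunk into the full array on all ranks.  */
        MPI_Allgather(MPI_IN_PLACE, 0, MPI_DATATYPE_NULL,
                      ax, chunk, MPI_DOUBLE, MPI_COMM_WORLD);

        free(ax);
        MPI_Finalize();
        return 0;
    }

The same pattern applies to ay and az (or use MPI_Allgatherv if the
per-rank counts are uneven).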


The big issue here is how much work each process is doing compared to  
the whole.  If your problem is not "big enough", the communication  
costs can outweigh the computation costs and any benefits you might  
have gained from parallelization will be lost.  In short: you usually  
need a big computational problem before you'll see benefits from  
parallelization.  (Yes, there are lots of corner cases; find a  
textbook on parallel computation for the finer details.)
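
A rough back-of-the-envelope model (textbook-style, not anything
specific to your code) makes the same point: with p processes, a
compute time that splits as T_comp/p, and a communication cost T_comm
that does not shrink as p grows, the speedup is roughly

    S(p) = T_comp / (T_comp/p + T_comm)

which only gets close to p when T_comp/p is much larger than T_comm,
i.e., when the problem is big relative to the number of processes.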


Here's a writeup I did on basic parallel computing many years ago that  
might be helpful:


http://www.osl.iu.edu/~jsquyres/bladeenc/details.php

--
Jeff Squyres
Cisco Systems



Re: [OMPI users] parallel molecular Dynamic simulations: All to All Communication

2008-03-19 Thread Ramesh Chembeti

Thank you Jeff. I am going through your link.

Ramesh

