NWChem 5.0), which has built-in
parallelization with TCGMSG.
Thanks
francesco pietra
This is a problem of the local compiler, not of openmpi.
Attached is the config.log.
Kind regards
francesco pietra
Sorry for the mistake "CXX=/opt/.icc" below. Actually the command was
issued correctly, with "CXX=/opt/...icpc". I have checked that directly by
searching back through the issued commands.
francesco pietra
--- George Bosilca wrote:
> I ran into this a few weeks ago. The
--- Francesco Pietra wrote:
> Date: Sun, 24 Jun 2007 01:47:13 -0700
Henry:
Thanks.
The "icc" issue did not exist. The command line of ./configure was correct for
CXX respect. What was wrong was the lack of "/bin" before icpc.
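In other words, the corrected command had this shape (the Intel directories
below are only indicative of my installation):

  # paths indicative, not exact
  ./configure CC=/opt/intel/cce/10.1.015/bin/icc \
              CXX=/opt/intel/cce/10.1.015/bin/icpc \
              FC=/opt/intel/fce/10.1.015/bin/ifort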
With that fixed, both the ./configure and make commands completed OK.
Then, I tried
# checkinstall make install
which aborted. (checkinstall has been removed from that investigation, to
understand why the compilation of openmpi seems to be OK according to the
test they suggest.)
regards
francesco pietra
--- Francesco Pietra wrote:
> Date: Mon, 25 Jun 2007 09:12:27 -0700 (PDT)
> From: Francesco Pietra
> Subject: Re: [OMPI users] intel/openmpi
> T
location of the include/ and lib/ subdirectories containing
mpi.f
libmpi.a
liblam.a
liblamf77mpi.a
which was confusing to me. None of these libraries is on my system, and I never
advocated LAM.
Thanks for helping
francesco pietra
I can't expect the problem to be investigated on the openmpi side.
Thanks
francesco pietra
--- Jeff Squyres wrote:
> On Jul 23, 2007, at 5:11 PM, Bert Wesarg wrote:
>
> >> I'm not sure what these command line switches do...? "-openmpi" is
> >> not a s
case would be to adapt the amber/openmpi flags so that openmpi understands
what amber is looking for. What you wrote could probably help in this regard.
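What I have in mind is something of this sort (hypothetical, since the exact
variable names depend on Amber's build files):

  # point Amber's build at the Open MPI wrapper compilers (names hypothetical)
  FC=/usr/local/bin/mpif90
  CC=/usr/local/bin/mpicc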
Thanks
francesco pietra
--- Andrey Kaliazin wrote:
Hi all,
I have compiled Amber9 with OpenMPI 1.2 without a hitch.
Machine - an 8-way (16
Thanks
francesco pietra
I rule out the alternative of compiling Amber9 with GNU compilers, which will
run slower.
Thanks
francesco pietra
Are there any detailed directions for upgrading (for ordinary users, not
experts, I mean)? My 1.2.3 version on Debian Linux amd64 runs perfectly.
Thanks
francesco pietra
--- Tim Mattox wrote:
> The Open MPI Team, representing a consortium of research, academic,
> and industry partners, is plea
]
[deb64:03540] *** End of error message ***
mpirun noticed that job rank 0 with PID 3537 on node deb64 exited on signal 15
(Terminated).
3 additional processes aborted (not shown)
Thanks
francesco pietra
ing in terms of binary compatibility between versions.
>
>
> On Nov 7, 2007, at 8:05 AM, Francesco Pietra wrote:
>
> > I wonder whether any suggestion can be offered about segmentation
> > fault
> > occurring on running a docking program (DOCK 6.1, written in C) on
>
--- Adrian Knoth wrote:
> On Wed, Nov 07, 2007 at 08:09:14AM -0500, Jeff Squyres wrote:
>
> > I'm not familiar with DOCK or Debian, but you will definitely have
>
> And last but not least,
Surely not last. My OpenMPI was Intel-compiled, for a simple reason: Amber9, as a
Fortran program, runs fast
On Wed, Nov 07, 2007 at 07:00:31 -0800, Francesco Pietra wrote:
>
I was lucky, given my modest skill with systems. In a couple of hours the
system was OK again.
DOCK, configured for MPICH and compiled with gcc, is now running in parallel by
pointing to OpenMPI 1.2.3 compiled with ifort/icc. top -i shows ? (several
instances on my system)
What about programs compiled from a config set up for mpich2, though they
run in parallel by pointing to Open MPI 1.2 (one instance on my system)?
Thanks
francesco pietra
--- Tim Mattox wrote:
> The Open MPI Team, representing a consortium of research, ac
figures stand for me as user) from which I carried out the
compilation.
/usr/local/bin contains:
mpic++
mpicc
mpiCC
mpicxx
mpiexec
mpif77
mpif90
mpirun
ompi_info
orted
orterun
Are these overwritten during the new compilation, or is some deletion needed?
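(For what it's worth, one can check which installation the wrappers in the
PATH belong to with the standard Open MPI commands:

  mpicc --showme            # prints the underlying compiler line and paths
  ompi_info | grep Prefix   # prints the installation prefix
)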
Thanks
francesco pietra
If I understand, the
undefined reference to `opal_progress_mpi_enable'
make[3]: *** [octopus_mpi] Error 1
make[3]: Leaving directory `/home/francesco/octopus/octopus-3.1.0/src/main'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/home/francesco/octopus/octopus-3.1.0/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/francesco/octopus/octopus-3.1.0'
make: *** [all] Error 2
francesco@tya64:~/octopus/octopus-3.1.0$
Thanks for help
francesco pietra
fo
reports about previous installation of 1.2.6 version (intel).
Is this situation correct for compiling a program with
FC=/opt/bin/mpif90
CC=/opt/bin/mpicc
etcetera
?
How do I get the runtime for the compiled program?
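For instance (a sketch; the file names are only illustrative):

  /opt/bin/mpif90 hello.f90 -o hello   # compile with the wrapper
  /opt/bin/mpirun -n 4 ./hello         # run with the matching mpirun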
I am asking all that because I have problems in the compilation, so
tha
supporting GAMESS-US parallelization
with openmpi, or any form of MPI as a seed to move to openmpi?
thanks
francesco pietra
compiler NO refers: gcc or intel? I
would also very much appreciate directions on how to check gcc and the intel
C++ compiler independently.
I checked that:
which ifort
/opt/intel/fce/10.1.015/bin/ifort
in my .bashrc:
. /opt/intel/fce/10.1.015/bin/ifortvars.sh (and iccvars.sh)
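To check the C and C++ compilers independently, the standard version queries
should do (assuming all are in the PATH):

  which gcc  && gcc --version
  which g++  && g++ --version
  which icc  && icc --version    # Intel C
  which icpc && icpc --version   # Intel C++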
thanks
francesco pietra
Observatory - Columbia University
> Palisades, NY, 10964-8000 - USA
> -----
>
> Jeff Squyres wrote:
>>
>> On Apr 2, 2009, at 7:21 AM, Francesco Pietra wrote:
>>
>>> Hi:
>>> With debian linux amd64 lenny I tried to install openmpi-1.3.1 instead
>>>
As far as I understand,
apt-get install libnuma1 libnuma-dbg libnuma-dev
installs the 64-bit libnuma libraries in /usr/lib; there are also two symlinks,
/usr/lib32 and /usr/lib64, but there is no trace of the libnuma
libraries there.
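One can list where the packages actually put their files with dpkg, e.g.:

  dpkg -L libnuma1    | grep libnuma   # the shared library
  dpkg -L libnuma-dev | grep numa.h    # the header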
That the computational codes I use run twice as fast at 64 bit than 32 bi
n?
thanks
francesco
On Thu, Apr 2, 2009 at 1:46 PM, Jeff Squyres wrote:
> On Apr 2, 2009, at 7:21 AM, Francesco Pietra wrote:
>
>> Hi:
>> With debian linux amd64 lenny I tried to install openmpi-1.3.1 instead
>> of using the executables openmpi-1.2.6 of previous disks. I c
/ifort --with-libnuma=/usr/lib
failed because
"expected file /usr/lib/include/numa.h was not found"
In debian amd64 lenny numa.h has a different location
"/usr/include/numa.h". Attached is the config.log.
I would appreciate help in circumventing the problem.
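(As it turned out later in this thread, the cure is to pass the prefix only,
since configure appends the include/numa.h part itself:

  ./configure ... --with-libnuma=/usr

so that /usr/include/numa.h is found.)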
Thanks
france
I was not sure whether that is a technically correct procedure. It works. Thanks
francesco
On Fri, Apr 3, 2009 at 6:05 PM, John Hearns wrote:
> 2009/4/3 Francesco Pietra :
>> > "expected file /usr/lib/include/numa.h was not found"
>>
>> In debian amd64 len
I'm interested in does not want to compile. Now,
with the parallel support in order, I hope.
francesco
On Fri, Apr 3, 2009 at 7:04 PM, John Hearns wrote:
> 2009/4/3 Francesco Pietra :
>> I was not sure whether that is a technically correct procedure. It works.
>> Tha
arose at the time of
amd64 etch with the same
configuration of ssh, same compilers, and openmpi 1.2.6.
I am here because the warning from the amber site is that I should
learn how to use my installation of MPI. Therefore, if there is any
clue ...
thanks
francesco pietra
I hope this helps.
> Gus Correa
> -
> Gustavo Correa
> Lamont-Doherty Earth Observatory - Columbia University
> Palisades, NY, 10964-8000 - USA
> -
Hi Francesco, list
>
> Francesco Pietra wrote:
>>
>> On Mon, Apr 6, 2009 at 5:21 PM, Gus Correa wrote:
>>>
>>> Hi Francesco
>>>
>>> Did you try to run examples/connectivity_c.c,
>>> or examples/hello_c.c before trying amber?
>>
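For reference, the bundled examples are compiled and run with the wrapper
compilers, e.g.:

  mpicc hello_c.c -o hello_c   # compile with the Open MPI wrapper
  mpirun -n 2 hello_c          # run on 2 processes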
2009 at 11:03 PM, Gus Correa wrote:
> Hi Francesco
>
>
> See answers inline.
>
> Francesco Pietra wrote:
>>
>> Hi Gus:
>> Partial quick answers below. I have reestablished the ssh connection
>> so that tomorrow I'll run the tests. Everything t
y don't
> show up in `man mpirun`).
> And I think you should add a "-n 4" (for 4 processes)
> Furthermore, if you want to specify a host, you have to add "-host hostname1"
> if you want to specify several hosts you have to add "-host
> hostname1,hostnam
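Putting those options together, a run line would look like this (host names
only illustrative):

  mpirun -n 4 -host hostname1,hostname2 connectivity_c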
3 of 4
Combined with all the other investigations, the installation of openmpi
1.3.1 is correct.
thanks a lot for your lesson
francesco
-- Forwarded message ------
From: Francesco Pietra
List-Post: users@lists.open-mpi.org
Date: Tue, Apr 7, 2009 at 11:39 AM
Subject: Re: [OMPI users] ssh MPi a
intels 11, should I first do
anything to the openmpi that was compiled with intels 10?
thanks
francesco pietra
print in what I wrote, but
at the moment I am unable to do better. All issues about ssh were
resolved. Actually, there was no issue: I created the issues myself and
apologize for that.
thanks
francesco
On Wed, Apr 8, 2009 at 10:28 AM, Marco wrote:
> * Francesco Pietra [2009 04 06, 16:51]:
>> cd cy
On Wed, Apr 8, 2009 at 12:59 PM, Jeff Squyres wrote:
> On Apr 8, 2009, at 5:08 AM, Francesco Pietra wrote:
>
>> As I was unable to compile the parallel code Amber10 with openmpi
>> 1.3.1 (fully tested) and intel compilers and mkl version 10 on debian
>> amd64 lenny,
ncesco
>
> I hope this helps.
> Gus Correa
> -
> Gustavo Correa
> Lamont-Doherty Earth Observatory - Columbia University
> Palisades, NY, 10964-8000 - USA
> -
linking is highly recommended as opposed to setting
LD_LIBRARY_PATH at run time, not the least because it hardcodes the
paths to the "right" library files in the executables themselves"
Should this be relevant to the present issue, where to learn about
-rpath linking?
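(As far as I understand, with gcc- or icc-style compilers rpath linking is
requested at link time with something like

  LDFLAGS="-Wl,-rpath,/opt/intel/fce/10.1.015/lib"

where the directory, my assumption here, is wherever libimf.so and the other
Intel runtime libraries live.)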
thanks
francesco pietra
t-dynamic -lnsl -lutil
thanks
francesco
On Thu, Apr 9, 2009 at 9:29 PM, Gus Correa wrote:
> Hi Francesco
>
> Francesco Pietra wrote:
>>
>> Hi:
>> As failure to find "limits.h" in my attempted compilations of Amber of
>> the past few days (amd64 lenny,
Sorry, the first line of the output below (copied manually) should read
/usr/local/bin/mpirun -host deb64 -n 4 connectivity_c 2>&1 | tee connectivity.ou
-- Forwarded message --
From: Francesco Pietra
List-Post: users@lists.open-mpi.org
Date: Fri, Apr 10, 2009 at
; Gus Correa
> -
> Gustavo Correa
> Lamont-Doherty Earth Observatory - Columbia University
> Palisades, NY, 10964-8000 - USA
> -
>
> Francesco Pietra wr
apply to the amber site, because they have declined interest in
adapting Amber9 to present software. Unfortunately I don't have two
sufficiently powerful computers, one for the present and one for the vintage setup.
Thanks a lot for considering my mail
francesco pietra
On Fri, Apr 10, 2009 at 6:24 PM, Jeff Squyre
> If you want to find libimf.so, which is a shared INTEL library,
> pass the library path with a -x on mpirun
>
> mpirun -x LD_LIBRARY_PATH ....
>
> DM
>
>
> On Fri, 10 Apr 2009, Francesco Pietra wrote:
>
>> Hi Gus:
>>
>> If you feel that the observations bel
I used --with-libnuma=/usr since Prentice Bisbal's suggestion and it
worked. Unfortunately, I found no way to fix the failure in finding
libimf.so when compiling openmpi-1.3.1 with intels, as you have seen
in other e-mail from me. And gnu compilers (which work well with both
openmpi and the slower
On Wed, Apr 15, 2009 at 8:39 PM, Prentice Bisbal wrote:
> Francesco Pietra wrote:
>> I used --with-libnuma=/usr since Prentice Bisbal's suggestion and it
>> worked. Unfortunately, I found no way to fix the failure in finding
>> libimf.so when compiling openmpi-1.3.1 wit
error: libimf.so not found (the library was
sourced with the *.sh intel scripts)
francesco
On Thu, Apr 16, 2009 at 4:43 AM, Nysal Jan wrote:
> You could try statically linking the Intel-provided libraries. Use
> LDFLAGS=-static-intel
>
> --Nysal
>
> On Wed, 2009-04-15 at 21:03
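A configure line using that suggestion would be something like this (a sketch;
only the LDFLAGS=-static-intel part comes from the advice above):

  ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
              LDFLAGS=-static-intel --prefix=/usr/local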
non-interactively. E.g.:
>
> thisnode$ ssh othernode env | sort
>
> shows the relevant stuff in your environment on the other node. Note that
> this is different than
>
> thisnode$ ssh othernode
> othernode$ env | sort
>
>
>
>
> On Apr 16, 2009, at 8:56 AM
from the
internal network.
francesco
On Thu, Apr 16, 2009 at 5:37 PM, Jeff Squyres wrote:
> On Apr 16, 2009, at 11:29 AM, Francesco Pietra wrote:
>
>> francesco@tya64:~$ ssh 192.168.1.33 env | sort
>> HOME=/home/francesco
>> LANG=en_US.UTF-8
>> LOGNAME=francesco
>&
On Thu, Apr 16, 2009 at 6:04 PM, Douglas Guptill wrote:
> On Thu, Apr 16, 2009 at 05:29:14PM +0200, Francesco Pietra wrote:
>> On Thu, Apr 16, 2009 at 3:04 PM, Jeff Squyres wrote:
> ...
>> Given my inexperience as system analyzer, I assume that I am messing
>> somethi