On 04/08/2014 06:37 AM, Jeff Squyres (jsquyres) wrote:
> You should ping the Rocks maintainers and ask them to upgrade.
> Open MPI 1.4.3 was released in September of 2010.
On Rocks, you can install Open MPI from source (and any other software
application, by the way) on their standard NFS shared directory.
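As a rough sketch of that route (the tarball name and the /share/apps prefix are assumptions -- use whatever NFS-exported path your frontend actually shares with the compute nodes):

    # build a current Open MPI into an NFS-shared prefix visible on all nodes
    tar xjf openmpi-1.8.tar.bz2          # tarball downloaded from open-mpi.org
    cd openmpi-1.8
    ./configure --prefix=/share/apps/openmpi-1.8
    make -j4 all install
    # then prepend /share/apps/openmpi-1.8/bin to PATH and
    # /share/apps/openmpi-1.8/lib to LD_LIBRARY_PATH on every node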
You should ping the Rocks maintainers and ask them to upgrade. Open MPI 1.4.3
was released in September of 2010.
On Apr 8, 2014, at 5:37 AM, Nisha Dhankher -M.Tech(CSE) wrote:

> The latest Rocks 6.2 carries only this version.
And thank you very much.
On Tue, Apr 8, 2014 at 3:07 PM, Nisha Dhankher -M.Tech(CSE) <
nishadhankher-coaese...@pau.edu> wrote:
> The latest Rocks 6.2 carries only this version.
>
> On Tue, Apr 8, 2014 at 3:49 AM, Jeff Squyres (jsquyres) <jsquy...@cisco.com> wrote:
>
>> Open MPI 1.4.3 is *ancient*.
The latest Rocks 6.2 carries only this version.
On Tue, Apr 8, 2014 at 3:49 AM, Jeff Squyres (jsquyres)
wrote:
> Open MPI 1.4.3 is *ancient*. Please upgrade -- we just released Open MPI
> 1.8 last week.
>
> Also, please look at this FAQ entry -- it steps you through a lot of basic
> troubleshooting steps about getting basic MPI programs working.
Open MPI 1.4.3 is *ancient*. Please upgrade -- we just released Open MPI 1.8
last week.
Also, please look at this FAQ entry -- it steps you through a lot of basic
troubleshooting steps about getting basic MPI programs working.
http://www.open-mpi.org/faq/?category=running#diagnose-multi-host
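A minimal first check along the lines of that FAQ entry (the node names are placeholders for two of your compute nodes; hello_c is the hello-world program from Open MPI's examples/ directory):

    # run a non-MPI program across two nodes first, then a trivial MPI job
    mpirun -np 2 -host compute-0-0,compute-0-1 hostname
    mpirun -np 2 -host compute-0-0,compute-0-1 ./hello_c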
mpirun --mca btl ^openib --mca btl_tcp_if_include eth0 -np 16
-machinefile mf mpiblast -d all.fas -p blastn -i query.fas -o out.txt

was the command I executed on the cluster...
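(Note that btl_tcp_if_include eth0 only helps if eth0 really is the interface carrying the 10.1.x.x cluster network on every node; a quick hedged check, assuming standard Linux tools and the machine file mf from the command above:)

    # show which interface holds the 10.1.x.x address on each node in the machine file
    for h in $(cat mf); do
        ssh $h "hostname; ip -4 addr show | grep 'inet 10.1.'"
    done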
On Sat, Apr 5, 2014 at 12:34 PM, Nisha Dhankher -M.Tech(CSE) <
nishadhankher-coaese...@pau.edu> wrote:
Sorry Ralph, my mistake: it's not "names"... it is "it does not happen on the same
nodes."
On Sat, Apr 5, 2014 at 12:33 PM, Nisha Dhankher -M.Tech(CSE) <
nishadhankher-coaese...@pau.edu> wrote:
> The same VM (virt-manager) is on all machines.
The same VM (virt-manager) is on all machines.
On Sat, Apr 5, 2014 at 12:32 PM, Nisha Dhankher -M.Tech(CSE) <
nishadhankher-coaese...@pau.edu> wrote:
> Open MPI version 1.4.3.
>
> On Fri, Apr 4, 2014 at 8:13 PM, Ralph Castain wrote:
>
>> Okay, so if you run mpiBlast on all the non-name nodes, everything is okay?
Open MPI version 1.4.3.
On Fri, Apr 4, 2014 at 8:13 PM, Ralph Castain wrote:
> Okay, so if you run mpiBlast on all the non-name nodes, everything is
> okay? What do you mean by "names nodes"?
>
>
> On Apr 4, 2014, at 7:32 AM, Nisha Dhankher -M.Tech(CSE) <
> nishadhankher-coaese...@pau.edu> wrote:
>
Okay, so if you run mpiBlast on all the non-name nodes, everything is okay?
What do you mean by "names nodes"?
On Apr 4, 2014, at 7:32 AM, Nisha Dhankher -M.Tech(CSE)
wrote:
> no it does not happen on names nodes
>
>
> On Fri, Apr 4, 2014 at 7:51 PM, Ralph Castain wrote:
> Hi Nisha
On Apr 4, 2014, at 7:39 AM, Reuti wrote:
> On 04.04.2014, at 05:55, Ralph Castain wrote:
>
>> On Apr 3, 2014, at 8:03 PM, Nisha Dhankher -M.Tech(CSE)
>> wrote:
>>
>>> Thank you, Ralph.
>>> Yes, the cluster is heterogeneous...
>>
>> And did you configure OMPI --enable-heterogeneous? And are you running it
>> with --hetero-nodes? What version of OMPI are you using anyway?
On 04.04.2014, at 05:55, Ralph Castain wrote:
> On Apr 3, 2014, at 8:03 PM, Nisha Dhankher -M.Tech(CSE)
> wrote:
>
>> Thank you, Ralph.
>> Yes, the cluster is heterogeneous...
>
> And did you configure OMPI --enable-heterogeneous? And are you running it
> with --hetero-nodes? What version of OMPI are you using anyway?
no it does not happen on names nodes
On Fri, Apr 4, 2014 at 7:51 PM, Ralph Castain wrote:
> Hi Nisha
>
> I'm sorry if my questions appear abrasive - I'm just a little frustrated
> at the communication bottleneck as I can't seem to get a clear picture of
> your situation. So you really don't need to keep calling me "sir" :-)
Hi Nisha
I'm sorry if my questions appear abrasive - I'm just a little frustrated at the
communication bottleneck as I can't seem to get a clear picture of your
situation. So you really don't need to keep calling me "sir" :-)
The error you are hitting is very unusual - it means that the process
Sir,
The same virt-manager is being used by all PCs. No, I did not enable the Open MPI
heterogeneous support. Yes, the Open MPI version is the same everywhere, through
the same kickstart file.
OK... actually, sir... Rocks itself installed and configured Open MPI and MPICH
on its own through the HPC roll.
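(Since the roll installs both MPI stacks, it may be worth confirming which mpirun mpiBlast actually picks up on the compute nodes; a small hedged check, assuming Open MPI is the intended one:)

    # confirm the mpirun on PATH is Open MPI's, and note its version
    which mpirun
    mpirun --version    # Open MPI's mpirun reports itself as "mpirun (Open MPI) ..."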
On Fri, Apr 4, 2014 at 9:25 AM, Ralph Castain wrote:
On Apr 3, 2014, at 8:03 PM, Nisha Dhankher -M.Tech(CSE)
wrote:
> Thank you, Ralph.
> Yes, the cluster is heterogeneous...
And did you configure OMPI --enable-heterogeneous? And are you running it with
--hetero-nodes? What version of OMPI are you using anyway?
Note that we don't care if the host pc'
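For reference, the two flags Ralph is asking about look roughly like this (the prefix, process count, machine file, and application name are placeholders; the build flag is only needed if the nodes really have different data representations, e.g. a 32-/64-bit mix):

    # build-time support for mixed data representations
    ./configure --prefix=/share/apps/openmpi-1.8 --enable-heterogeneous
    make -j4 all install

    # run-time flag telling mpirun the nodes are not identical
    mpirun --hetero-nodes -np 16 -machinefile mf ./my_mpi_app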
Thank you, Ralph.
Yes, the cluster is heterogeneous...
And I haven't put the compute nodes directly on the physical nodes (PCs), because
in college it is not possible to take the whole lab of 32 PCs for your work, so I
ran them on VMs.
In a Rocks cluster, the frontend gives the same kickstart to all the PCs, so the
Open MPI version should be the same.
What is "mpiformatdb"? We don't have an MPI database in our system, and I have
no idea what that command means
As for that error - it means that the identifier we exchange between processes
is failing to be recognized. This could mean a couple of things:
1. the OMPI version on the two ends is d
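One way to rule out a version mismatch across the nodes (ompi_info ships with Open MPI; mf is the machine file used in the commands below):

    # print the Open MPI version reported on each node in the machine file
    for h in $(cat mf); do
        echo -n "$h: "
        ssh $h "ompi_info | grep 'Open MPI:'"
    done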
I first formatted my database with the mpiformatdb command, then I ran:

mpirun -np 64 -machinefile mf mpiblast -d all.fas -p blastn -i query.fas -o
output.txt

but then it gave this error 113 from some hosts and continued to run for the
others, but with no results even after 2 hours had elapsed, on Rocks 6.
I also made a machine file which contains the IP addresses of all the compute
nodes, plus a .ncbirc file with the paths to mpiblast and to the shared and
local storage.
Sir,
I ran the same mpirun command on my college supercomputer, 8 nodes each having
24 processors, but it just kept running and gave no result up until 3 hours...
I first formatted my database with the mpiformatdb command, then I ran:

mpirun -np 64 -machinefile mf mpiblast -d all.fas -p blastn -i query.fas -o
output.txt

but then it gave this error 113 from some hosts and continued to run for the
others, but with no results even after 2 hours had elapsed, on Rocks 6.
The host nodes from which this error came are
10.1.255.254, 10.1.255.236, 10.1.255.241.
On Thu, Apr 3, 2014 at 8:37 PM, Ralph Castain wrote:
> I'm having trouble understanding your note, so perhaps I am getting this
> wrong. Let's see if I can figure out what you said:
>
> * your perl command fails with "no route to host" - but I don't see any host
> in your cmd. Maybe I'm just missing something.
I'm having trouble understanding your note, so perhaps I am getting this wrong.
Let's see if I can figure out what you said:
* your perl command fails with "no route to host" - but I don't see any host in
your cmd. Maybe I'm just missing something.
* you tried running a couple of "mpirun", but