Dear Mike, it sounds good... the description fits my purposes... I really missed
this when I was reading the srun man page! I will give it a try.
Thanks to everybody for the help and support!
F
> On Aug 21, 2014, at 7:58 PM, Mike Dubman wrote:
>
> Hi Filippo,
>
> I think you can use SLURM_LOCALID var (at least with slurm v14.03.4-2)
Update: I got both OpenMPI 1.8.1 and 1.8.2rc4 to configure and build on my
Mac laptop running OS X 10.9.4.
Neither works on the 2-day-old Mac Pro, but in investigating this I found
other problems not related to OpenMPI - probably hardware or OS related.
Time to exercise the warranty.
@Ralph : Tha
FWIW: I just tried on my Mac with the Intel 14.0 compilers, and it configured
and built just fine. However, that was with the current state of the 1.8 branch
(the upcoming 1.8.2 release), so you might want to try that in case there is a
difference.
On Aug 21, 2014, at 12:59 PM, Gus Correa wrote:
Hi Peter
If I remember right from compiling OMPI on a Mac
years ago, you need to have Xcode installed, in case you don't have it yet.
If VampirTrace is the only problem,
you can disable it when you configure OMPI (--disable-vt).
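For reference, a configure line combining this suggestion with the compiler
settings from the original post might look like the following (the prefix path
is taken from that post; adjust as needed):

$ ./configure --prefix=/opt/openmpi/intel CC=icc CXX=icpc FC=ifort \
    --disable-vt 2>&1 | tee ~/openmpi-config.out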
My two cents,
Gus Correa
On 08/21/2014 03:35 PM, Bosler, Peter And
Good afternoon,
I'm having trouble configuring OpenMPI for use with the Intel compilers. I run
the command "./configure -prefix=/opt/openmpi/intel CC=icc CXX=icpc FC=ifort
2>&1 | tee ~/openmpi-config.out" and I notice three problems:
1. I get two instances of "Report this to
http://www.ope
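As an aside, the "Report this to ..." warnings typically come from autoconf
header checks; the failing compile test and its error are recorded in
config.log, and since the output above was tee'd to a file, the relevant spots
can be located with something like:

$ grep -n -B 5 "Report this to" ~/openmpi-config.out
$ less config.log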
Hi Filippo,
I think you can use SLURM_LOCALID var (at least with slurm v14.03.4-2)
$srun -N2 --ntasks-per-node 3 env |grep SLURM_LOCALID
SLURM_LOCALID=1
SLURM_LOCALID=2
SLURM_LOCALID=0
SLURM_LOCALID=0
SLURM_LOCALID=1
SLURM_LOCALID=2
$
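As a sketch of how the variable might be consumed, a small wrapper script
could use SLURM_LOCALID to pin each rank to a per-node resource; the GPU
mapping and the names (wrap.sh, my_app) below are purely illustrative:

$ cat wrap.sh
#!/bin/sh
# One GPU per local rank: local rank 0 sees GPU 0, local rank 1 sees GPU 1, ...
export CUDA_VISIBLE_DEVICES=$SLURM_LOCALID
exec "$@"
$ srun -N2 --ntasks-per-node 3 ./wrap.sh ./my_app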
Kind Regards,
M
On Thu, Aug 21, 2014 at 9:27 PM, Ralph Castain wrote:
On Aug 21, 2014, at 10:58 AM, Filippo Spiga wrote:
> Dear Ralph
>
> On Aug 21, 2014, at 2:30 PM, Ralph Castain wrote:
>> I'm afraid that none of the mapping or binding options would be available
>> under srun as those only work via mpirun. You can pass MCA params in the
>> environment of course, or in default MCA param files.
Dear Ralph
On Aug 21, 2014, at 2:30 PM, Ralph Castain wrote:
> I'm afraid that none of the mapping or binding options would be available
> under srun as those only work via mpirun. You can pass MCA params in the
> environment of course, or in default MCA param files.
I understand. I hopefully
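For reference, both routes Ralph mentions look roughly like this (the btl
value is only an example):

$ export OMPI_MCA_btl=self,sm,tcp     # MCA param via the environment
$ srun -n 4 ./a.out
$ echo "btl = self,sm,tcp" >> $HOME/.openmpi/mca-params.conf   # default param file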
Should not be required (unless they are statically built), as we do strive to
maintain ABI within a series.
On Aug 21, 2014, at 9:39 AM, Maxime Boissonneault
wrote:
> Hi,
> Would you say that software compiled using OpenMPI 1.8.1 needs to be
> recompiled using OpenMPI 1.8.2rc4 to work properly?
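A quick way to check whether an existing binary falls under the "statically
built" caveat is to inspect its linkage (binary name illustrative):

$ ldd ./my_app | grep libmpi

If a libmpi line appears, the binary picks up the installed shared library and
should keep working across the 1.8 series; if nothing is printed, MPI was
linked statically and a rebuild against 1.8.2 would be needed.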
Hi,
Would you say that software compiled using OpenMPI 1.8.1 needs to be
recompiled using OpenMPI 1.8.2rc4 to work properly?
Maxime
On 21.08.2014 at 16:50, Reuti wrote:
> On 21.08.2014 at 16:00, Ralph Castain wrote:
>
>>
>> On Aug 21, 2014, at 6:54 AM, Reuti wrote:
>>
>>> On 21.08.2014 at 15:45, Ralph Castain wrote:
>>>
On Aug 21, 2014, at 2:51 AM, Reuti wrote:
> On 20.08.2014 at 23:16, Ralph Castain wrote:
On 21.08.2014 at 16:00, Ralph Castain wrote:
>
> On Aug 21, 2014, at 6:54 AM, Reuti wrote:
>
>> On 21.08.2014 at 15:45, Ralph Castain wrote:
>>
>>> On Aug 21, 2014, at 2:51 AM, Reuti wrote:
>>>
On 20.08.2014 at 23:16, Ralph Castain wrote:
>
> On Aug 20, 2014, at 11:16
On Aug 21, 2014, at 6:54 AM, Reuti wrote:
> On 21.08.2014 at 15:45, Ralph Castain wrote:
>
>> On Aug 21, 2014, at 2:51 AM, Reuti wrote:
>>
>>> On 20.08.2014 at 23:16, Ralph Castain wrote:
>>>
On Aug 20, 2014, at 11:16 AM, Reuti wrote:
> On 20.08.2014 at 19:05, Ralph Castain wrote:
On 21.08.2014 at 15:45, Ralph Castain wrote:
> On Aug 21, 2014, at 2:51 AM, Reuti wrote:
>
>> On 20.08.2014 at 23:16, Ralph Castain wrote:
>>
>>>
>>> On Aug 20, 2014, at 11:16 AM, Reuti wrote:
>>>
On 20.08.2014 at 19:05, Ralph Castain wrote:
>>
>> Aha, this is quite in
On Aug 21, 2014, at 2:51 AM, Reuti wrote:
> On 20.08.2014 at 23:16, Ralph Castain wrote:
>
>>
>> On Aug 20, 2014, at 11:16 AM, Reuti wrote:
>>
>>> On 20.08.2014 at 19:05, Ralph Castain wrote:
>>>
>
> Aha, this is quite interesting - how do you do this: scanning the
> /proc/<pid>/
On Aug 20, 2014, at 11:46 PM, Filippo Spiga wrote:
> Hi Joshua,
>
> On Aug 21, 2014, at 12:28 AM, Joshua Ladd wrote:
>> When launching with mpirun in a SLURM environment, srun is only being used
>> to launch the ORTE daemons (orteds). Since the daemon will already exist on
>> the node from
Not sure I understand. The problem has been fixed in both the trunk and the 1.8
branch now, so you should be able to work with either of those nightly builds.
On Aug 21, 2014, at 12:02 AM, Timur Ismagilov wrote:
> Have I any opportunity to run MPI jobs?
>
>
> Wed, 20 Aug 2014 10:48:38 -0700
Hi,
On 20.08.2014 at 20:08, Oscar Mojica wrote:
> Well, with qconf -sq one.q I got the following:
>
> [oscar@aguia free-noise]$ qconf -sq one.q
> qname one.q
> hostlist compute-1-30.local compute-1-2.local
> compute-1-3.local \
> compute-1-
Hi,
On 21.08.2014 at 01:56, tmish...@jcity.maeda.co.jp wrote:
> Reuti,
>
> Sorry for confusing you. Under a managed condition, the -np option is
> actually not necessary, so this command line also works for me
> with Torque.
>
> $ qsub -l nodes=10:ppn=N
> $ mpirun -map-by slot:pe=N ./inverse.exe
A
On 20.08.2014 at 23:16, Ralph Castain wrote:
>
> On Aug 20, 2014, at 11:16 AM, Reuti wrote:
>
>> On 20.08.2014 at 19:05, Ralph Castain wrote:
>>
Aha, this is quite interesting - how do you do this: scanning the
/proc/<pid>/status or the like? What happens if you don't find enough fr
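For the curious, the kind of procfs probing speculated about here could look
like the following ($PID is a placeholder; both fields are standard entries
in /proc/<pid>/status):

$ grep -E "Cpus_allowed_list|VmRSS" /proc/$PID/status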
Have I any opportunity to run MPI jobs?
Wed, 20 Aug 2014 10:48:38 -0700, from Ralph Castain:
>Yes, I know - it is CMR'd
>
>On Aug 20, 2014, at 10:26 AM, Mike Dubman <mi...@dev.mellanox.co.il> wrote:
>>btw, we get same error in v1.8 branch as well.
>>
>>
>>On Wed, Aug 20, 2014 at 8:06 PM, Ralph
Hi Joshua,
On Aug 21, 2014, at 12:28 AM, Joshua Ladd wrote:
> When launching with mpirun in a SLURM environment, srun is only being used to
> launch the ORTE daemons (orteds). Since the daemon will already exist on the
> node from which you invoked mpirun, this node will not be included in the
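In other words, a typical launch under SLURM looks like the following; mpirun
itself runs on the first allocated node, and srun is only used internally to
start one orted on each remaining node (node and rank counts illustrative):

$ salloc -N 4
$ mpirun -np 16 ./my_app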