expected expression before '[' token
configure:159455: $? = 1
configure: program exited with status 1
configure: failed program was:
...
So
int fildes[[2]];
and similar lines should be replaced with
int fildes[2];
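For context, here is a minimal sketch of what the corrected test body might look like, assuming the configure check in question is probing pipe(2); the real test program is not shown in the snippet, so the function being checked is an assumption:

#include <unistd.h>

/* Minimal sketch; the pipe(2) check is an assumption, the real
   configure test may probe something else entirely. */
int main(void)
{
    int fildes[2];   /* was emitted as "int fildes[[2]];", which is not valid C */
    return pipe(fildes);
}

The doubled brackets are usually an m4 quoting artifact (m4 treats [ and ] as quote characters), so the underlying fix in the configure sources is typically a quoting change or the @<:@ / @:>@ quadrigraphs rather than hand-editing the generated configure script.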
I've attached a diff file which worked for me.
Regards,
Andrew Senin
lders. But yesterday I tried recompiling multiple times with no
effect. So I believe this must be somehow related to some unknown
settings in the lab which have been changed. Trying to reproduce the
crash now...
Regards,
Andrew Senin.
On Thu, Jan 19, 2012 at 12:05 AM, Jeff Squyres wrote:
> Ju
slurm 2.3.2
-Andrew
On Tue, Jan 17, 2012 at 6:05 PM, Ralph Castain wrote:
> What version of slurm?
>
>
> Sent from my iPad
>
> On Jan 17, 2012, at 4:36 AM, Andrew Senin wrote:
>
>> Hi Ralph,
>>
>> If you want, Mike can provide access to the lab wi
Hi Ralph,
If you want, Mike can provide access to the lab with RHEL 6.0 where we
see the problem.
Thanks,
Andrew Senin
On Tue, Jan 17, 2012 at 9:59 AM, Mike Dubman wrote:
> It happens for us on RHEL 6.0
>
>
> On Tue, Jan 17, 2012 at 3:46 AM, Ralph Castain
> wrote:
>>
fault).
-
Thanks,
Andrew Senin
Ralph, Jeff, thanks!
I managed to make it work with the following configure options:
./configure --with-pmi=/usr/ --with-slurm=/usr/ --without-psm
--prefix=`pwd`/install
Regards,
Andrew Senin
On Wed, Jan 11, 2012 at 7:17 PM, Ralph Castain wrote:
> Well, yes - but it isn't quite tha
1. Is SLURM support in the head revision of the 1.5 branch stable
enough to use in the lab?
2. Does direct launch of MPI applications require setting the
SLURM_STEP_RESV_PORTS environment variable?
Thanks,
Andrew Senin.
Please assist: is this a bug, or am I doing something improperly?
Regards,
Andrew Senin
tions with ompi_info --all | grep fca command.
Regards,
Andrew Senin.
On Fri, Sep 2, 2011 at 10:18 PM, Konz, Jeffrey (SSA Solution Centers) <
jeffrey.k...@hp.com> wrote:
>
> I see that OpenMPI 1.5.x supports the Mellanox/Voltaire FCA MPI Collective
> Accelerator.
>
> Are an
ojects/gather
- ~/projects/distribs/openmpi-1.4.3/install/bin/mpicc -o gather ./gather.c
- ~/projects/distribs/openmpi-1.4.3/install/bin/mpirun -n 9 ./gather
- crash!
-Andrew
On Wed, May 25, 2011 at 10:48 PM, Andrew Senin wrote:
> Not exactly. I have 16 core nodes. Even if I run all 9 ranks on th
Not exactly. I have 16-core nodes. Even if I run all 9 ranks on the same
node it fails (with --mca btl sm,self). I also tried running on different
nodes (3 nodes, 3 ranks on each node) with openib and tcp, with the same
effect. Also, as I wrote in another message, I could see this effect on vbox
wit
ash
>
> Andrew,
>
> I tried with a freshly installed 1.4.3 but I can't reproduce your
> issue. I tried with the 1.5 and the trunk and all complete your code
> without errors. Not even valgrind found anything to complain about ...
>
> george.
>
>
> On M
ers] MPI_Allgather with derived type crash
>
> On Wednesday, May 25, 2011 01:16:04 PM Andrew Senin wrote:
> > Hello list,
> >
> > I have an application which uses MPI_Allgather with derived types. It
> > works correctly with mpich2 and mvapich2. However it crashes
> >
Hello list,
I have an application which uses MPI_Allgather with derived types. It works
correctly with mpich2 and mvapich2. However it crashes periodically with
Open MPI. After investigation I found that the crash takes place when I use
derived datatypes with MPI_Allgather and the number of ranks is g
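For readers following the thread, here is a hedged sketch of the kind of program being described: a single MPI_Allgather call on a struct-based derived datatype. The reporter's actual application and gather.c are not shown in the archive, so the struct layout, counts, and names below are assumptions, not the original code.

#include <mpi.h>
#include <stddef.h>   /* offsetof */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical element type; the real application's layout is unknown. */
typedef struct {
    int    id;
    double value[3];
} elem_t;

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Describe elem_t to MPI: one int followed by three doubles, using
       the actual struct member offsets. For full portability one would
       normally also resize the type to sizeof(elem_t) with
       MPI_Type_create_resized; omitted here for brevity. */
    int          blocklens[2] = { 1, 3 };
    MPI_Aint     displs[2]    = { offsetof(elem_t, id), offsetof(elem_t, value) };
    MPI_Datatype types[2]     = { MPI_INT, MPI_DOUBLE };
    MPI_Datatype elem_type;
    MPI_Type_create_struct(2, blocklens, displs, types, &elem_type);
    MPI_Type_commit(&elem_type);

    elem_t send = { rank, { (double)rank, (double)rank, (double)rank } };
    elem_t *recv = malloc((size_t)size * sizeof(elem_t));

    /* Every rank contributes one element and receives one from each rank. */
    MPI_Allgather(&send, 1, elem_type, recv, 1, elem_type, MPI_COMM_WORLD);

    if (rank == 0)
        printf("allgather over %d ranks completed\n", size);

    free(recv);
    MPI_Type_free(&elem_type);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun -n 9, this mirrors the nine-rank reproduction recipe quoted earlier in the thread.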