Hi Brice,
You will need MLNX_OFED with GPUDirect support in order for this to work. I will
check whether there's a release of it that supports SLES and let you know.
[pak@maia001 ~]$ /sbin/modinfo ib_core
filename:
/lib/modules/2.6.18-194.nvel5/updates/kernel/drivers/infiniband/core/ib_core.ko
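For reference, here is a minimal sketch (assumed names and sizes, not code from
this thread) of the kind of transfer GPUDirect v1 is meant to speed up: the MPI
calls themselves are unchanged; the GPUDirect-enabled OFED stack lets the IB
driver reuse the CUDA-pinned host buffer, so the extra host-side staging copy
goes away. Run with at least two ranks, e.g. mpirun -np 2.

#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    const size_t n = 1 << 20;          /* 1M floats, an arbitrary size */
    float *buf = NULL;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* CUDA-pinned host buffer; this is the allocation the GPUDirect-enabled
       driver stack lets the IB HCA use directly. */
    cudaMallocHost((void **)&buf, n * sizeof(float));

    if (rank == 0)
        MPI_Send(buf, (int)n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(buf, (int)n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFreeHost(buf);
    MPI_Finalize();
    return 0;
}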
--
- Pak Lui
pak@sun.com
Hi Henk,
SLIM H.A. wrote:
Dear Pak Lui
I can delete the (SGE) job with qdel -f so that it disappears from the
job list, but the application processes keep running, including the
shepherds. I have to kill them with -15.
For some reason the kill -15 does not reach mpirun. (We use such a
tion?
Thanks,
~Tim
rintf("MPI_Comm_connect() failled, sleeping and retrying...\n");
}
sleep(1);
}
MPI_Comm_disconnect(&intercomm);
MPI_Finalize();
return 0;
}
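For completeness, a self-contained version of that client loop might look like
the sketch below (a hypothetical example; the program name, the port-name
argument, and the error handling are my assumptions, not the original poster's
code). The idea is to keep retrying MPI_Comm_connect() until the server side
has called MPI_Comm_accept().

#include <mpi.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    MPI_Comm intercomm = MPI_COMM_NULL;
    int rc;

    MPI_Init(&argc, &argv);
    if (argc < 2) {
        fprintf(stderr, "usage: client <port name from MPI_Open_port>\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Make connect failures return an error code instead of aborting,
       so the loop below can retry until the server accepts. */
    MPI_Comm_set_errhandler(MPI_COMM_SELF, MPI_ERRORS_RETURN);

    do {
        rc = MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0,
                              MPI_COMM_SELF, &intercomm);
        if (rc != MPI_SUCCESS) {
            printf("MPI_Comm_connect() failed, sleeping and retrying...\n");
            sleep(1);
        }
    } while (rc != MPI_SUCCESS);

    MPI_Comm_disconnect(&intercomm);
    MPI_Finalize();
    return 0;
}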
Component v1.2.5)
I also tried the pre-release 1.2.6rc3, with the same results.
Prakashan
rintf("Processor %d finalizing\n", rank);
MPI_Finalize();
printf("Processor %d Goodbye!\n", rank);
}
in your mpirun command and look for the launch
commands that mpirun uses.
Regards,
Romaric
Romaric David wrote:
Pak Lui wrote:
It was fixed at one point in the trunk before v1.3 went official, but
while rolling the code from the gridengine PLM into the rsh PLM code, this
feature was left out because there were some lingering issues that I
didn't resolve, and I lost track
This looks more and more like an SGE issue: it is not able to accept tasks
from multiple queues for a parallel job.
BTW, you don't need the --with-sge switch in the OMPI configure. It's new in
OMPI v1.3, where it was added so that we don't build SGE support by default.
My $.02...
- Pak Lui
p...@penguincomputing.com
Penguin
call tm_init again?
If you are curious to know about the implementation for PBS, you can
download the source from openpbs.org. OpenPBS source:
v2.3.16/src/lib/Libifl/tm.c
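For illustration only, a bare-bones tm_init() call might look like the sketch
below. The header name, the TM_SUCCESS return code, and the tm_roots field
names are assumed from the OpenPBS/Torque tm.h; this is not code from the
thread.

#include <stdio.h>
#include <tm.h>    /* TM interface shipped with OpenPBS / Torque */

int main(void)
{
    struct tm_roots roots;

    /* tm_init() only succeeds when the process runs inside a PBS job and
       can reach its MOM; the question above is whether it may be called
       a second time after tm_finalize(). */
    if (tm_init(NULL, &roots) != TM_SUCCESS) {
        fprintf(stderr, "tm_init() failed (not running under PBS?)\n");
        return 1;
    }
    printf("tm_init() OK: task id %u, parent %u, %d nodes\n",
           (unsigned)roots.tm_me, (unsigned)roots.tm_parent, roots.tm_nnodes);

    tm_finalize();
    return 0;
}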
--
Thanks,
- Pak Lui
pak@sun.com
awn2: MPI_APPNUM = 1
1: ./mspawn2: MPI_APPNUM = 1
Password:
orted: Command not found.
^C^\Quit
--
Thanks,
- Pak Lui
ack'
'build_alias=i686-pc-linux-gnu' 'host_alias=i686-pc-linux-gnu'
--enable-ltdl-convenience\"
Any help would be greatly appreciated.
Thanks.
[1] http://gridengine.sunsource.net/servlets/ReadMsg?list=users&msgNo=15775
--
Eric Thibodeau
Neural Bucket Solutions Inc.
T. (514) 736-1436
C. (514) 710-0517
@headless ~ $
Eric
On Friday, June 16, 2006 at 10:31, Pak Lui wrote:
> Hi, I noticed your prefix is set to the lib dir; can you try without the
> lib64 part and rerun?
>
> Eric Thibodeau wrote:
> > Hello everyone,
> >
> > Well, first off, I hope this proble
cesses aborted (not shown)
lf Of Pak Lui
Sent: 17 January 2007 19:16
To: Open MPI Users
Subject: Re: [OMPI users] Problems with ompi1.2b2, SGE and DLPOLY [Scanned]
Sorry for jumping in late.
I was able to use ~128 SGE slots for my test run, with either of the
SGE allocation rules ($fill_up or $round_robin) and -np 64.
Geoff Galitz wrote:
Hello,
On the following system:
OpenMPI 1.1.1
SGE 6.0 (with tight integration)
Scientific Linux 4.3
Dual Dual-Core Opterons
MPI jobs are oversubscribing the nodes. No matter where jobs are
launched by the scheduler, they always stack up on the first node
(node00).
e no
difference.
Thanks,
Todd Heywood
problem could be?
Regards, Götz Waschk