Hi Tim,
Well, in general (though not on the MIC) I usually build the MPI stacks
with the Intel compiler set. Have you run into software that requires GCC
instead of the Intel compilers (besides NVIDIA CUDA)? Did you try using
the Intel compiler to produce MIC-native code (for the Open MPI stack
itself, for that matter)?
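For reference, the sort of recipe I have in mind is roughly the following
(untested on my side; the compilervars.sh path and the use of "icc -mmic"
are my assumptions, not a verified MPSS procedure):

  # Put the Intel compilers and the MPSS k1om binutils on the PATH
  source /opt/intel/composer_xe_2013/bin/compilervars.sh intel64
  export PATH=/usr/linux-k1om-4.7/bin:$PATH

  # Configure Open MPI as a MIC-native build, compiling with icc -mmic
  ./configure --build=x86_64-unknown-linux-gnu --host=x86_64-k1om-linux \
      --disable-mpi-f77 \
      CC="icc -mmic" CXX="icpc -mmic" \
      --prefix=/opt/openmpi-mic
  make -j8 && make install

Whether the openib BTL would fare any better that way than in your GNU
cross-compile below, I don't know.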
regards,
Michael


On Mon, Jul 8, 2013 at 4:30 PM, Tim Carlson <tim.carl...@pnl.gov> wrote:

> On Mon, 8 Jul 2013, Elken, Tom wrote:
>
> It isn't quite so easy.
>
> Out of the box, there is no gcc on the Phi card. You can use the cross
> compiler on the host, but you don't get gcc on the Phi by default.
>
> See this post:
> http://software.intel.com/en-us/forums/topic/382057
>
> I really think you would need to build and install gcc on the Phi first.
>
> My first pass at doing a cross-compile with the GNU compilers failed to
> produce something with OFED support (not surprising):
>
>   export PATH=/usr/linux-k1om-4.7/bin:$PATH
>   ./configure --build=x86_64-unknown-linux-gnu --host=x86_64-k1om-linux \
>       --disable-mpi-f77
>
>   checking if MCA component btl:openib can compile... no
>
> Tim
>
>> Thanks Tom, that sounds good. I will give it a try as soon as our Phi
>> host here gets installed.
>>
>> I assume that all the prerequisite libs and bins on the Phi side are
>> available when we download the Phi s/w stack from Intel's site, right?
>>
>> [Tom]
>> Right. When you install Intel's MPSS (Manycore Platform Software
>> Stack), including following the section on "OFED Support" in the readme
>> file, you should have all the prerequisite libs and bins. Note that I
>> have not built Open MPI for Xeon Phi for your interconnect, but it
>> seems to me that it should work.
>>
>> -Tom
>>
>> Cheers,
>> Michael
>>
>> On Mon, Jul 8, 2013 at 12:10 PM, Elken, Tom <tom.el...@intel.com> wrote:
>>
>> Do you guys have any plan to support the Intel Phi in the future? That
>> is, running MPI code on the Phi cards, or across the host multicore and
>> the Phi, as Intel MPI does?
>>
>> [Tom]
>> Hi Michael,
>>
>> Because a Xeon Phi card acts a lot like a Linux host with an x86
>> architecture, you can build your own Open MPI libraries to serve this
>> purpose.
>>
>> Our team has used existing (an older 1.4.3 version of) Open MPI source
>> to build an Open MPI for running MPI code on Intel Xeon Phi cards over
>> Intel's (formerly QLogic's) True Scale InfiniBand fabric, and it works
>> quite well. We have not released a pre-built Open MPI as part of any
>> Intel software release. But I think if you have a compiler for Xeon Phi
>> (Intel Compiler or GCC) and an interconnect for it, you should be able
>> to build an Open MPI that works on Xeon Phi.
>>
>> Cheers,
>> Tom Elken
>>
>> thanks...
>> Michael
>>
>> On Sat, Jul 6, 2013 at 2:36 PM, Ralph Castain <r...@open-mpi.org> wrote:
>>
>> Rolf will have to answer the question on the level of support. The CUDA
>> code is not in the 1.6 series, as it was developed after that series
>> went "stable". It is in the 1.7 series, although the level of support
>> will likely increase incrementally as that "feature" series continues
>> to evolve.
>>
>> On Jul 6, 2013, at 12:06 PM, Michael Thomadakis
>> <drmichaelt7...@gmail.com> wrote:
>>
>> > Hello Open MPI,
>> >
>> > I am wondering what level of support there is for CUDA and GPUDirect
>> > on Open MPI 1.6.5 and 1.7.2.
>> >
>> > I saw the ./configure --with-cuda=CUDA_DIR option in the FAQ.
>> > However, it seems that configure in v1.6.5 ignored it.
>> >
>> > Can you identify GPU memory and send messages from it directly,
>> > without copying to host memory first?
>> >
>> > Or, in general, what level of CUDA support is there in 1.6.5 and
>> > 1.7.2? Do you support SDK 5.0 and above?
>> >
>> > Cheers ...
>> > Michael
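P.S. On the --with-cuda question quoted above: since, per Ralph, the CUDA
code only exists in the 1.7 series (which would explain why 1.6.5's
configure appeared to ignore the option), I would expect a CUDA-aware
1.7.2 build to be configured along these lines (the prefix and the CUDA
install path below are just placeholders on my part):

  ./configure --prefix=$HOME/sw/openmpi-1.7.2-cuda \
      --with-cuda=/usr/local/cuda
  make -j8 && make install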