Sorry, I forgot to attach the error log. Here it is.
Riddhi Mehta
Research Group: Maxim Lyutikov, Theoretical High Energy Astrophysics
Dept. of Physics & Astronomy
Purdue University
From: users <users-boun...@lists.open-mpi.org> on behalf of Riddhi A Mehta via
users <users@lists.open-mpi.org>
Reply-To: Open MPI Users <users@lists.open-mpi.org>
Date: Tuesday, August 20, 2019 at 3:06 PM
To: Open MPI Users <users@lists.open-mpi.org>
Cc: Riddhi A Mehta <meht...@purdue.edu>
Subject: Re: [OMPI users] CUDA supported APIs
Hello
I was able to install Open MPI 4.0.1 on my Mac and verify that it works.
However, I am now running into another problem. I am building an astrophysical
code named PLUTO, which uses several old MPI routines that have been removed
from newer versions of the MPI standard. As a result, I get errors during the
'make' step. The error log is attached as a text file. Can someone guide me on
how to fix this and make use of the new routines?
Thanks
Riddhi
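The routines flagged in the attached log (MPI_Type_extent, MPI_Type_hvector,
MPI_Type_struct, MPI_LB, MPI_UB) were removed in MPI-3.0, and the Open MPI
4.0.x series no longer provides them by default. One way to build PLUTO
unchanged is to reconfigure Open MPI with its MPI-1 compatibility flag; the
source directory and install prefix below are placeholders, not paths from
this thread:

```shell
# Rebuild Open MPI 4.0.1 with the removed MPI-1 symbols restored.
# Adjust the source directory and --prefix to your own layout.
cd ~/openmpi-4.0.1
./configure --prefix="$HOME/opt/openmpi-4.0.1" --enable-mpi1-compatibility
make -j4 all
make install
```

This is a stopgap: the compatibility option is deprecated and the removed
symbols are expected to disappear for good in later releases, so porting PLUTO
to the replacement routines named in each error message is the durable fix.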
From: users <users-boun...@lists.open-mpi.org> on behalf of "Zhang, Junchao via
users" <users@lists.open-mpi.org>
Reply-To: Open MPI Users <users@lists.open-mpi.org>
Date: Monday, August 19, 2019 at 6:17 PM
To: "Fang, Leo" <leof...@bnl.gov>
Cc: "Zhang, Junchao" <jczh...@mcs.anl.gov>, Open MPI Users
<users@lists.open-mpi.org>
Subject: Re: [OMPI users] CUDA supported APIs
Leo,
Thanks for the info. That is interesting. And yes, having a CUDA-aware MPI
API list would be very useful.
--Junchao Zhang
On Mon, Aug 19, 2019 at 10:23 AM Fang, Leo <leof...@bnl.gov> wrote:
Hi Junchao,
First, for your second question, the answer is here:
https://www.mail-archive.com/users@lists.open-mpi.org/msg33279.html. I know
this because I also asked it earlier 😊 It'd be nice to have this documented in
the Q&A though.
As for your first question, I am also interested. It'd be nice for the Open MPI
core devs to keep the supported-API list up to date. We recently added
CUDA-aware MPI support to mpi4py, and such a list is important for us to track
upstream support, so that we know whether a test fails due to lack of
CUDA-awareness or because we messed up (much less likely).
Thanks.
Sincerely,
Leo
---
Yao-Lung Leo Fang
Assistant Computational Scientist
Computational Science Initiative
Brookhaven National Laboratory
Bldg. 725, Room 2-169
P.O. Box 5000, Upton, NY 11973-5000
Office: (631) 344-3265
Email: leof...@bnl.gov
Website: https://leofang.github.io/
________________________________
From: users <users-boun...@lists.open-mpi.org> on behalf of Zhang, Junchao via
users <users@lists.open-mpi.org>
Sent: Thursday, August 15, 2019 at 11:52 AM
To: Open MPI Users <users@lists.open-mpi.org>
Cc: Zhang, Junchao <jczh...@mcs.anl.gov>
Subject: Re: [OMPI users] CUDA supported APIs
Another question: if MPI_Allgatherv(const void *sendbuf, int sendcount,
MPI_Datatype sendtype, void *recvbuf, const int recvcounts[], const int
displs[], MPI_Datatype recvtype, MPI_Comm comm) is CUDA-aware, are recvcounts
and displs in CPU memory or GPU memory?
--Junchao Zhang
On Thu, Aug 15, 2019 at 9:55 AM Junchao Zhang <jczh...@mcs.anl.gov> wrote:
Hi,
Are the APIs at
https://www.open-mpi.org/faq/?category=runcuda#mpi-apis-cuda
latest? I could not find MPI_Neighbor_xxx and MPI_Reduce_local.
Thanks.
--Junchao Zhang
Mpirun error

/Users/ram/Purdue_Physics/RProjects/Research_SW/PLUTO/Src/Parallel/al_subarray_.c:88:5: error: static_assert failed
      "MPI_Type_extent was removed in MPI-3.0. Use MPI_Type_get_extent instead."
    MPI_Type_extent(oldtype, (MPI_Aint *) &extent);
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:2820:31: note: expanded from macro 'MPI_Type_extent'
#define MPI_Type_extent(...) THIS_SYMBOL_WAS_REMOVED_IN_MPI30(MPI_Type_extent, MPI_Type_get_extent)
                              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:322:57: note: expanded from macro 'THIS_SYMBOL_WAS_REMOVED_IN_MPI30'
#define THIS_SYMBOL_WAS_REMOVED_IN_MPI30(func, newfunc) _Static_assert(0, #func " was removed in MPI-3.0. Use " #newfunc ...
                                                        ^                  ~

/Users/ram/Purdue_Physics/RProjects/Research_SW/PLUTO/Src/Parallel/al_subarray_.c:113:17: error: static_assert failed
      "MPI_Type_hvector was removed in MPI-3.0. Use MPI_Type_create_hvector instead."
        MPI_Type_hvector(array_of_subsizes[i], 1, size, tmp1, &tmp2);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:2822:32: note: expanded from macro 'MPI_Type_hvector'
#define MPI_Type_hvector(...) THIS_SYMBOL_WAS_REMOVED_IN_MPI30(MPI_Type_hvector, MPI_Type_create_hvector)
                               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:322:57: note: expanded from macro 'THIS_SYMBOL_WAS_REMOVED_IN_MPI30'
#define THIS_SYMBOL_WAS_REMOVED_IN_MPI30(func, newfunc) _Static_assert(0, #func " was removed in MPI-3.0. Use " #newfunc ...
                                                        ^                  ~

/Users/ram/Purdue_Physics/RProjects/Research_SW/PLUTO/Src/Parallel/al_subarray_.c:142:17: error: static_assert failed
      "MPI_Type_hvector was removed in MPI-3.0. Use MPI_Type_create_hvector instead."
        MPI_Type_hvector(array_of_subsizes[i], 1, size, tmp1, &tmp2);
        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:2822:32: note: expanded from macro 'MPI_Type_hvector'
#define MPI_Type_hvector(...) THIS_SYMBOL_WAS_REMOVED_IN_MPI30(MPI_Type_hvector, MPI_Type_create_hvector)
                               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:322:57: note: expanded from macro 'THIS_SYMBOL_WAS_REMOVED_IN_MPI30'
#define THIS_SYMBOL_WAS_REMOVED_IN_MPI30(func, newfunc) _Static_assert(0, #func " was removed in MPI-3.0. Use " #newfunc ...
                                                        ^                  ~

/Users/ram/Purdue_Physics/RProjects/Research_SW/PLUTO/Src/Parallel/al_subarray_.c:169:16: error: expected expression
    types[0] = MPI_LB;
               ^
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:1087:24: note: expanded from macro 'MPI_LB'
# define MPI_LB THIS_SYMBOL_WAS_REMOVED_IN_MPI30(MPI_LB, MPI_Type_create_resized);
                ^
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:322:57: note: expanded from macro 'THIS_SYMBOL_WAS_REMOVED_IN_MPI30'
#define THIS_SYMBOL_WAS_REMOVED_IN_MPI30(func, newfunc) _Static_assert(0, #func " was removed in MPI-3.0. Use " #newfunc ...
                                                        ^

/Users/ram/Purdue_Physics/RProjects/Research_SW/PLUTO/Src/Parallel/al_subarray_.c:171:16: error: expected expression
    types[2] = MPI_UB;
               ^
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:1086:24: note: expanded from macro 'MPI_UB'
# define MPI_UB THIS_SYMBOL_WAS_REMOVED_IN_MPI30(MPI_UB, MPI_Type_create_resized);
                ^
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:322:57: note: expanded from macro 'THIS_SYMBOL_WAS_REMOVED_IN_MPI30'
#define THIS_SYMBOL_WAS_REMOVED_IN_MPI30(func, newfunc) _Static_assert(0, #func " was removed in MPI-3.0. Use " #newfunc ...
                                                        ^

/Users/ram/Purdue_Physics/RProjects/Research_SW/PLUTO/Src/Parallel/al_subarray_.c:173:5: error: static_assert failed
      "MPI_Type_struct was removed in MPI-3.0. Use MPI_Type_create_struct instead."
    MPI_Type_struct(3, blklens, disps, types, newtype);
    ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:2824:31: note: expanded from macro 'MPI_Type_struct'
#define MPI_Type_struct(...) THIS_SYMBOL_WAS_REMOVED_IN_MPI30(MPI_Type_struct, MPI_Type_create_struct)
                              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/ram/Purdue_Physics/RProjects/Research_SW/mpi/include/mpi.h:322:57: note: expanded from macro 'THIS_SYMBOL_WAS_REMOVED_IN_MPI30'
#define THIS_SYMBOL_WAS_REMOVED_IN_MPI30(func, newfunc) _Static_assert(0, #func " was removed in MPI-3.0. Use " #newfunc ...
                                                        ^                  ~

6 errors generated.
make: *** [al_subarray_.o] Error 1
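For reference, the MPI-2/MPI-3 replacements for the removed calls in this log
are mechanical renames except in two cases. A sketch, reusing the variable
names from the log (illustrative only, not a tested PLUTO patch; new_lb and
new_extent are hypothetical values the caller must supply):

/* MPI_Type_extent gains a lower-bound output argument: */
MPI_Aint lb, extent;
MPI_Type_get_extent(oldtype, &lb, &extent);  /* was MPI_Type_extent(oldtype, &extent) */

/* MPI_Type_hvector and MPI_Type_struct are drop-in renames in the C binding: */
MPI_Type_create_hvector(array_of_subsizes[i], 1, size, tmp1, &tmp2);
MPI_Type_create_struct(3, blklens, disps, types, newtype);

/* The MPI_LB / MPI_UB marker types no longer exist. Instead of embedding them
   in a struct type, build the type without them and set its bounds afterwards: */
MPI_Datatype resized;
MPI_Type_create_resized(tmp2, new_lb, new_extent, &resized);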
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users