Re: [OMPI users] I can't build openmpi 4.0.X using PMIx 3.1.5 to use with Slurm

2020-05-12 Thread Jeff Squyres (jsquyres) via users
It looks like you are building both static and dynamic libraries 
(--enable-static and --enable-shared).  This might be confusing the issue -- I 
can see at least one warning:

icc: warning #10237: -lcilkrts linked in dynamically, static library not 
available

It's not easy to tell from the snippets you sent what other downstream side 
effects this might have.

Is there a reason to compile statically?  It generally leads to (much) bigger 
executables, and far less memory efficiency (i.e., the library is not shared in 
memory between all the MPI processes running on each node).  Also, the link 
phase of compilers tends to prefer shared libraries, so unless your apps are 
compiled/linked with whatever the compiler's "link this statically" flags are, 
it's going to likely default to using the shared libraries.

This is a long way of saying: try building everything with just --enable-shared 
(and not --enable-static).  Or possibly just remove both flags; --enable-shared 
is the default.
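A sketch of what that configure line might look like (the install prefix and PMIx path below are placeholders, not taken from this thread):

```shell
# shared-only build; --enable-shared is the default, shown for clarity
./configure --prefix=/opt/openmpi-4.0.2 \
    --enable-shared --disable-static \
    --with-pmix=/usr --with-slurm
make -j"$(nproc)" && make install
```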




On May 11, 2020, at 9:23 AM, Leandro via users <users@lists.open-mpi.org> wrote:

Hi,

I'm trying to start using Slurm. I followed all the instructions to build 
PMIx and Slurm with PMIx support, but I can't get Open MPI to work.

According to the PMIx documentation, I should compile Open MPI with 
"--with-ompi-pmix-rte", but when I try, it fails. I need to build this as 
CentOS RPMs.

Thanks in advance for your help. I pasted some info below.

libtool: link: 
/tgdesenv/dist/compiladores/intel/compilers_and_libraries_2019.5.281/linux/bin/intel64/icc
 -std=gnu99 -std=gnu99 -DOPAL_CONFIGURE_USER=\"root\" 
-DOPAL_CONFIGURE_HOST=\"gr10b17n05\" "-DOPAL_CONFIGURE_DATE=\"Fri May  8 
13:35:51 -03 2020\"" -DOMPI_BUILD_USER=\"root\" 
-DOMPI_BUILD_HOST=\"gr10b17n05\" "-DOMPI_BUILD_DATE=\"Fri May  8 13:47:32 -03 
2020\"" "-DOMPI_BUILD_CFLAGS=\"-DNDEBUG -O3 -finline-functions 
-fno-strict-aliasing -restrict -Qoption,cpp,--extended_float_types -pthread\"" 
"-DOMPI_BUILD_CPPFLAGS=\"-I../../.. -I../../../orte/include\"" 
"-DOMPI_BUILD_CXXFLAGS=\"-DNDEBUG -O3 -finline-functions -pthread\"" 
"-DOMPI_BUILD_CXXCPPFLAGS=\"-I../../..  \"" "-DOMPI_BUILD_FFLAGS=\"-O2 -g -pipe 
-Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong 
--param=ssp-buffer-size=4 -grecord-gcc-switches   -m64 -mtune=generic 
-I/usr/lib64/gfortran/modules\"" -DOMPI_BUILD_FCFLAGS=\"-O3\" 
"-DOMPI_BUILD_LDFLAGS=\"-Wc,-static-intel -static-intel -L/usr/lib64\"" 
"-DOMPI_BUILD_LIBS=\"-lrt -lutil  -lz  -lhwloc  -levent -levent_pthreads\"" 
-DOPAL_CC_ABSOLUTE=\"\" -DOMPI_CXX_ABSOLUTE=\"none\" -DNDEBUG -O3 
-finline-functions -fno-strict-aliasing -restrict 
-Qoption,cpp,--extended_float_types -pthread -static-intel -static-intel -o 
.libs/ompi_info ompi_info.o param.o  -L/usr/lib64 ../../../ompi/.libs/libmpi.so 
-L/usr/lib -llustreapi 
/root/rpmbuild/BUILD/openmpi-4.0.2/opal/.libs/libopen-pal.so 
../../../opal/.libs/libopen-pal.so -lfabric -lucp -lucm -lucs -luct -lrdmacm 
-libverbs /usr/lib64/libpmix.so -lmunge -lrt -lutil -lz /usr/lib64/libhwloc.so 
-lm -ludev -lltdl -levent -levent_pthreads -pthread -Wl,-rpath -Wl,/usr/lib64
icc: warning #10237: -lcilkrts linked in dynamically, static library not 
available
../../../ompi/.libs/libmpi.so: undefined reference to `orte_process_info'
../../../ompi/.libs/libmpi.so: undefined reference to `orte_show_help'
make[2]: *** [ompi_info] Error 1
make[2]: Leaving directory 
`/root/rpmbuild/BUILD/openmpi-4.0.2/ompi/tools/ompi_info'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/root/rpmbuild/BUILD/openmpi-4.0.2/ompi'
make: *** [all-recursive] Error 1
error: Bad exit status from /var/tmp/rpm-tmp.RyklCR (%build)

The orte libraries are missing. When I don't use "--with-ompi-pmix-rte" it 
builds, but neither mpirun nor srun works:

c315@gr10b17n05 /bw1nfs1/Projetos1/c315/Meus_testes > cat machine_file
gr10b17n05
gr10b17n06
gr10b17n07
gr10b17n08
c315@gr10b17n05 /bw1nfs1/Projetos1/c315/Meus_testes > mpirun -machinefile 
machine_file ./mpihello
[gr10b17n07:115065] [[21391,0],2] ORTE_ERROR_LOG: Not found in file 
base/ess_base_std_orted.c at line 362
--
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_pmix_base_select failed
  --> Returned value Not found (-13) instead of ORTE_SUCCESS
--
--
ORTE was unable to reliably start one or more daemons.
This usually is caused by:

* not finding the required libraries and/or binaries on
  one or more nodes. Please check your PATH and LD_LIBRARY_PATH
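For reference, when Slurm is built with PMIx support, jobs are typically launched via srun rather than mpirun; a sketch (the exact plugin name can vary by Slurm version):

```shell
# list the MPI plugins this Slurm installation knows about
srun --mpi=list
# launch across 4 nodes using the PMIx plugin
srun --mpi=pmix -N 4 ./mpihello
```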

Re: [OMPI users] I can't build openmpi 4.0.X using PMIx 3.1.5 to use with Slurm

2020-05-12 Thread Leandro via users
Hi,

I compile it statically to make sure the compiler's libraries will not be a
dependency, and I have done it this way for years. The developers said they
wanted it this way, so I did.

I saw this warning, and that library is related to Omni-Path, which we
don't have.

---
*Leandro*



Re: [OMPI users] I can't build openmpi 4.0.X using PMIx 3.1.5 to use with Slurm

2020-05-12 Thread Ralph Castain via users
Try adding --without-psm2 to the PMIx configure line -- it sounds like you have 
that library installed on your machine, even though you don't have Omni-Path.
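A sketch of the suggested PMIx configure line (the prefix is a placeholder; keep whatever other flags you already use):

```shell
# rebuild PMIx without the psm2 (Omni-Path) component
./configure --prefix=/usr --without-psm2
make -j"$(nproc)" && make install
```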



Re: [OMPI users] I can't build openmpi 4.0.X using PMIx 3.1.5 to use with Slurm

2020-05-12 Thread Jeff Squyres (jsquyres) via users
On May 12, 2020, at 7:42 AM, Leandro wrote:
> 
> I compile it statically to make sure the compiler's libraries will not be a 
> dependency, and I have done it this way for years. 

For what it's worth, you're compiling with -static-intel, which should take 
care of removing the compiler's libraries as dependencies.

That being said, you could also --disable-shared --enable-static to *only* 
build static libraries.

You could also test with not building static libraries to see if that is 
causing the issue.

Also, are you compiling with an external PMIx?  Open MPI v4.0.x contains PMIx 
3.1.5; you shouldn't need an external PMIx.

Finally, from some of your error messages, those are error messages that I'd 
expect to see when versions of Open MPI and/or PMIx got accidentally mixed up 
between different nodes.  If you're not setting LD_LIBRARY_PATH because you're 
intending to use static libraries, you should triple check that there's no 
other libmpi.so (etc.) that are accidentally getting picked up somewhere.
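One way to triple-check which libraries get picked up (run on every node; the search paths below are just common locations, adjust for your system):

```shell
# which ompi_info is first in PATH
which ompi_info
# which shared libraries a dynamically linked binary would actually load
ldd "$(which ompi_info)" | grep -E 'libmpi|libopen-pal|libpmix'
# look for stray copies of libmpi.so that could be picked up by accident
find /usr /opt -name 'libmpi.so*' 2>/dev/null
```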

-- 
Jeff Squyres
jsquy...@cisco.com



Re: [OMPI users] I can't build openmpi 4.0.X using PMIx 3.1.5 to use with Slurm

2020-05-12 Thread Leandro via users
I will experiment with the static and shared libraries, but I need an external
PMIx to use with Slurm.
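For building Open MPI against an external PMIx for Slurm, the usual approach is --with-pmix plus matching external libevent/hwloc; a sketch with assumed paths (check where your distribution actually installs libpmix):

```shell
# point Open MPI at the system PMIx that Slurm was built against
./configure --with-slurm \
    --with-pmix=/usr --with-pmix-libdir=/usr/lib64 \
    --with-libevent=external --with-hwloc=external
```

The external libevent/hwloc flags matter because PMIx and Open MPI must agree on those libraries; mixing an internal and an external copy is a common source of run-time failures.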

---
*Leandro*




[OMPI users] Do I need C++ bindings for Open MPI mpicc

2020-05-12 Thread Konstantinos Konstantinidis via users
Hi,
I have a naive question. I have built Open MPI 3.1.6 on my system after
configuring as follows:
./configure --prefix=/usr/local

I am planning to use Python, so I want to build MPI4py 3.0.3 against this
Open MPI installation. The MPI4py requirements state that
*"If you use a MPI implementation providing a mpicc compiler wrapper (e.g.,
MPICH, Open MPI), it will be used for compilation and linking. This is the
preferred and easiest way of building MPI for Python."*

So I am wondering whether I need the C++ bindings (if they are still
supported in Open MPI), i.e., does mpicc need Open MPI to be configured with
"--enable-mpi-cxx" for MPI4py to work?

I won't be coding in C++ at all.

Thanks,
Konstantinos Konstantinidis
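For what it's worth, mpi4py's build honors the MPICC environment variable; a sketch of installing it against an Open MPI whose mpicc is on PATH (the smoke-test module is part of mpi4py itself):

```shell
# point mpi4py's build at Open MPI's C compiler wrapper
env MPICC="$(which mpicc)" python -m pip install mpi4py==3.0.3
# quick smoke test across two processes
mpiexec -n 2 python -m mpi4py.bench helloworld
```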


Re: [OMPI users] Do I need C++ bindings for Open MPI mpicc

2020-05-12 Thread Gilles Gouaillardet via users
Hi,

no you do not.

FWIW, the MPI C++ bindings were removed from the standard a decade ago.
mpicc is the wrapper for the C compiler; the wrappers for the C++
compilers are mpic++, mpiCC, and mpicxx.
If your C++ application only uses the MPI C bindings, then you do
not need --enable-mpi-cxx for the C++ wrappers to work.
But if your C++ application uses the MPI C++ bindings, you should
consider modernizing it
(plain C bindings, or other C++ abstractions such as Boost.MPI or
Elemental, for example).

Cheers,

Gilles
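To illustrate the point: a C++ source file that calls only the MPI *C* API compiles with mpicxx even when Open MPI was built without --enable-mpi-cxx (sketch; assumes the wrappers are on PATH):

```shell
cat > hello.cc <<'EOF'
#include <mpi.h>
#include <cstdio>
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);          // C binding, not the removed MPI::Init()
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    std::printf("rank %d\n", rank);
    MPI_Finalize();
    return 0;
}
EOF
mpicxx hello.cc -o hello   # links fine without the C++ bindings library
```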



Re: [OMPI users] Do I need C++ bindings for Open MPI mpicc

2020-05-12 Thread Konstantinos Konstantinidis via users
Awesome, thanks!
