npart (the dims argument of h5dread_f) should be an array, not a scalar. If you use the F2003 interface, you don't need to supply npart at all.
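For example, with the Fortran 90 interface the dims argument is an INTEGER(HSIZE_T) array with one entry per rank of the buffer. A minimal sketch, assuming your module already uses the hdf5 module, x is a rank-1 REAL buffer, and npart holds the number of points:

   integer(HSIZE_T), dimension(1) :: data_dims

   data_dims(1) = npart
   ! H5T_NATIVE_REAL is normally the memory type you want when reading
   ! into default reals; the stored file type (e.g. H5T_IEEE_F32BE) is
   ! converted by the library during the read.
   call h5dread_f(dset_id, H5T_NATIVE_REAL, x, data_dims, error)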
Scot

On Nov 8, 2017, at 5:59 PM, Guido granda muñoz <guidogra...@gmail.com> wrote:

Hello Scot,

The subroutine, including the declaration of its arguments, is below. The argument declarations are the following:

   integer(8)     :: recl_test, flen_test, iolength_test
   integer        :: error, hdferr, hdferr2, rank
   INTEGER(HID_T) :: file_id, dset_id, dataspace_id

-----------------------------------------------------------------

subroutine load_single_file(filename,filetype)

   ! loads particles into
   !    x(:), y(:), z(:), mass(:)
   ! and updates the variable
   !    npart = number of particles

   implicit none
   character(*), intent(in) :: filename
   integer*4, intent(in)    :: filetype
   logical*4                :: loadmasses
   integer*8                :: file_size
   integer*4                :: bytes_per_particle, i
   integer                  :: allocate_status
   ! guido debug
   integer(8)               :: recl_test, flen_test, iolength_test
   integer                  :: error, hdferr, hdferr2, rank
   INTEGER(HID_T)           :: file_id, dset_id, dataspace_id

   if (allocated(x))    deallocate(x)
   if (allocated(y))    deallocate(y)
   if (allocated(z))    deallocate(z)
   if (allocated(mass)) deallocate(mass)

   loadmasses = filetype<0
   if (loadmasses) then
      bytes_per_particle = 16
   else
      bytes_per_particle = 12
   end if

   if (abs(filetype) == 2) then ! Simple binary file

      ! determine number of particles
      inquire(file=trim(filename), size=file_size)
      npart = file_size/int(bytes_per_particle,8)
      if (int(npart,8)*int(bytes_per_particle,8).ne.file_size) then
         write(*,'(A)')
         write(*,'(A)') 'Format of input file not recognized. Consider specifying a different format using -input.'
         stop
      end if

      ! load particles
      allocate(x(npart),y(npart),z(npart),mass(npart))
      open(1,file=trim(filename),action='read',form='unformatted',status='old',access='stream')
      if (loadmasses) then
         read(1) (x(i),y(i),z(i),mass(i),i=1,npart)
      else
         read(1) (x(i),y(i),z(i),i=1,npart)
         mass = 1.0
      end if
      close(1)

   else if (abs(filetype) == 3) then ! Simple ascii file

      ! determine number of particles
      npart = 0
      open(1,file=trim(filename),action='read',form='formatted',status='old')
      stat = 0
      do while (stat==0)
         read(1,*,IOSTAT=stat) xempty
         if (stat.ne.0) exit
         npart = npart+1
      end do
      close(1)

      ! load particles
      allocate(x(npart),y(npart),z(npart),mass(npart))
      open(1,file=trim(filename),action='read',form='formatted',status='old')
      if (loadmasses) then
         do i = 1,npart
            read(1,*) x(i),y(i),z(i),mass(i)
         end do
      else
         do i = 1,npart
            read(1,*) x(i),y(i),z(i)
         end do
         mass = 1.0
      end if
      close(1)

   else if (abs(filetype) == 4) then ! Gadget binary file

      write(*,*) "The record length is: ",recl_test

      if(.true.) then
         call h5open_f(error)
         call h5fopen_f(trim(filename)//'.hdf5',H5F_ACC_RDONLY_F,file_id,error)
         call h5dopen_f(file_id,'x',dset_id,error)
         call h5dget_space_f(dset_id,dataspace_id,hdferr)
         call h5sget_simple_extent_npoints_f(dataspace_id,npart,hdferr2)
         write(*,*) 'The number of particles is :',npart
         allocate(x(npart),y(npart),z(npart),mass(npart),stat=allocate_status)
         call H5LTread_dataset_float_f(dset_id,'x',x)
         call h5dclose_f(dset_id,error)
         call h5dopen_f(file_id,'y',dset_id,error)
         call H5LTread_dataset_float_f(dset_id,'y',y)
         !call h5dread_f(dset_id,H5T_IEEE_F32BE,y,npart,error)
         call h5dclose_f(dset_id,error)
         call h5dopen_f(file_id,'z',dset_id,error)
         call H5LTread_dataset_float_f(dset_id,'z',z)
         !call h5dread_f(dset_id,H5T_IEEE_F32BE,z,npart,error)
         call h5dclose_f(dset_id,error)
         call h5fclose_f(file_id,error)
      endif
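      ! Note: the H5LTread_dataset_float_f calls above pass dset_id and
      ! omit the dims and errcode arguments; the H5LT interface normally
      ! expects the parent file/group id, the dataset name, the buffer,
      ! an INTEGER(HSIZE_T) dims array and an error code, roughly
      ! (sketch; data_dims is an assumed rank-1 INTEGER(HSIZE_T) array):
      !    call h5ltread_dataset_float_f(file_id, 'x', x, data_dims, error)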
      if (allocate_status /= 0) then
         write(*,'(A)') 'memory problem.!'
      else
         write(*,'(A)') 'memory is ok.'
      endif

      if (loadmasses) then
         read(1) (mass(i),i=1,npart)
      else
         mass = 1.0
      end if
      !close(1)

   end if

   if (npart==huge(npart)) then
      write(*,'(A)')
      write(*,'(A)') 'No single file can contain more than 2^31 particles.'
      stop
   end if

end subroutine load_single_file

2017-11-06 13:00 GMT-05:00 <hdf-forum-requ...@lists.hdfgroup.org>:

Today's Topics:

   1. Memory allocation/deallocation (Andreas Derler)
   2. Re: no specific subroutine for the generic 'h5dread_f' (Scot Breitenfeld)

----------------------------------------------------------------------

Message: 1
Date: Mon, 6 Nov 2017 09:59:43 +0100
From: Andreas Derler <andreas.der...@wirecube.at>
To: hdf-forum@lists.hdfgroup.org
Subject: [Hdf-forum] Memory allocation/deallocation

Hi,

I am trying to use the Java HDF5 interface (JHI5) in an application-server environment, where I write to many different HDF5 files within a single JVM instance. While doing so I am running into memory issues: writing to an HDF5 file causes memory to be allocated that is not deallocated even after the file has been written and closed successfully. So I would like to know whether there is a way to clear all allocated memory after writing to a file with the JHI5 library.

I have already made sure that everything is closed, and I have also tried to limit the cache sizes using H5Pset_cache and H5Pset_chunk_cache. However, changing the cache sizes does not eliminate the problem that memory is not deallocated after closing the file, and calling H5garbage_collect does not seem to change this behaviour either. I saw in the docs that the native implementation provides H5Pset_evict_on_close (https://support.hdfgroup.org/HDF5/doc/RM/RM_H5P.html#Property-SetEvictOnClose), but this call does not seem to be available in the JHI5 version. Is there any other way to make sure that all memory is deallocated, or am I doing something wrong?
To this end, here is the example code I am using:

final long[] dims = { 0, 0 };
final long[] maxdims = { HDF5Constants.H5S_UNLIMITED, HDF5Constants.H5S_UNLIMITED };
final int RANK = 2;
long cache_size = 1024L*1024; // cache size in bytes

try {
    dims[0] = data.length;    // num rows
    dims[1] = data[0].length; // num cols

    int file_id = H5.H5Pcreate(HDF5Constants.H5P_FILE_ACCESS);
    H5.H5Pset_cache(file_id, 0, 521L, cache_size, 1);
    file_id = H5.H5Fcreate(filename, HDF5Constants.H5F_ACC_TRUNC, HDF5Constants.H5P_DEFAULT, file_id);

    int dataspace_id = H5.H5Screate_simple(RANK, dims, maxdims);

    int dataset_access_property_list_id = H5.H5Pcreate(HDF5Constants.H5P_DATASET_ACCESS);
    H5.H5Pset_chunk_cache(dataset_access_property_list_id, 521L, cache_size, 1);

    int dataset_creation_property_list_id = H5.H5Pcreate(HDF5Constants.H5P_DATASET_CREATE);
    long[] dim_chunk = { dims[1], 1 };
    H5.H5Pset_chunk(dataset_creation_property_list_id, RANK, dim_chunk);

    int dataset_id = H5.H5Dcreate(file_id, path, HDF5Constants.H5T_NATIVE_DOUBLE, dataspace_id,
            HDF5Constants.H5P_DEFAULT, dataset_creation_property_list_id, dataset_access_property_list_id);

    H5.H5Dwrite(dataset_id, HDF5Constants.H5T_NATIVE_DOUBLE, HDF5Constants.H5S_ALL,
            HDF5Constants.H5S_ALL, HDF5Constants.H5P_DEFAULT, data);
    H5.H5Fflush(dataset_id, HDF5Constants.H5F_SCOPE_GLOBAL);

    H5.H5Dclose(dataset_id);
    H5.H5Sclose(dataspace_id);
    H5.H5Pclose(dataset_creation_property_list_id);
    H5.H5Pclose(dataset_access_property_list_id);
    H5.H5Fclear_elink_file_cache(file_id);
    H5.H5Pclose(file_id);
    H5.H5Fclose(file_id);
    H5.H5garbage_collect();
} catch (final Exception e) {
    e.printStackTrace();
}
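A side observation on the snippet above (a possible contributor, not a confirmed diagnosis): the identifier returned by H5Pcreate is stored in file_id and then overwritten by the identifier returned by H5Fcreate, so the file-access property list is never closed, and the later H5.H5Pclose(file_id) actually receives the file identifier. A minimal sketch that keeps the two identifiers separate, reusing filename and cache_size from the snippet:

    // Sketch: keep the file-access property list (fapl) id distinct from
    // the file id so each can be closed. Identifier types follow the
    // posted snippet; newer JHI5 releases use long instead of int.
    int fapl_id = H5.H5Pcreate(HDF5Constants.H5P_FILE_ACCESS);
    H5.H5Pset_cache(fapl_id, 0, 521L, cache_size, 1);
    int file_id = H5.H5Fcreate(filename, HDF5Constants.H5F_ACC_TRUNC,
                               HDF5Constants.H5P_DEFAULT, fapl_id);

    // ... dataspace/dataset creation, H5Dwrite and the close calls
    //     as in the snippet above ...

    H5.H5Pclose(fapl_id);   // close the property list itself
    H5.H5Fclose(file_id);   // then close the file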
------------------------------

Message: 2
Date: Mon, 6 Nov 2017 14:49:44 +0000
From: Scot Breitenfeld <brtn...@hdfgroup.org>
To: HDF Users Discussion List <hdf-forum@lists.hdfgroup.org>
Subject: Re: [Hdf-forum] no specific subroutine for the generic 'h5dread_f'

Can you include how you declared your arguments to h5dread_f? I suspect that one of your arguments is wrong and the compiler is not finding the correct interface.

Scot

> On Nov 4, 2017, at 12:03 AM, Guido granda muñoz <guidogra...@gmail.com> wrote:
>
> Hello,
>
> I am having trouble with a code that uses HDF5. The code is written in Fortran 90 and consists of a main program (procorr.f90) and a module (module_correlation_functions.f90).
>
> When I compile the code with the makefile, I get the following errors:
>
> gfortran -O3 -c module_correlation_functions.f90 -I/usr/local/hdf5/include -L/usr/local/hdf5/lib /usr/local/hdf5/lib/libhdf5hl_fortran.a /usr/local/hdf5/lib/libhdf5_hl.a /usr/local/hdf5/lib/libhdf5_fortran.a /usr/local/hdf5/lib/libhdf5.a -lz -ldl -lm -Wl,-rpath -Wl,/usr/local/hdf5/lib
> module_correlation_functions.f90:1583:68:
>
>      call h5dread_f(dset_id,H5T_IEEE_F32BE,x,npart,error)
>                                                                         1
> Error: There is no specific subroutine for the generic 'h5dread_f' at (1)
> module_correlation_functions.f90:1588:68:
>
>      call h5dread_f(dset_id,H5T_IEEE_F32BE,y,npart,error)
>                                                                         1
> Error: There is no specific subroutine for the generic 'h5dread_f' at (1)
> module_correlation_functions.f90:1593:68:
>
>      call h5dread_f(dset_id,H5T_IEEE_F32BE,z,npart,error)
>                                                                         1
> Error: There is no specific subroutine for the generic 'h5dread_f' at (1)
> makefile:38: recipe for target 'module_correlation_functions.o' failed
> make: *** [module_correlation_functions.o] Error 1
>
> The makefile I used to compile the code includes the location of the HDF5 library:
>
> LIBSHDF=-I/usr/local/hdf5/include -L/usr/local/hdf5/lib /usr/local/hdf5/lib/libhdf5hl_fortran.a /usr/local/hdf5/lib/libhdf5_hl.a /usr/local/hdf5/lib/libhdf5_fortran.a /usr/local/hdf5/lib/libhdf5.a -lz -ldl -lm -Wl,-rpath -Wl,/usr/local/hdf5/lib
> FCFLAGS = -O3
>
> # List of executables to be built within the package
> PROGRAMS = procorr
>
> # "make" builds all
> all: $(PROGRAMS)
>
> procorr.o: module_correlation_functions.o
> procorr: module_correlation_functions.o
>
> # ======================================================================
> # And now the general rules, these should not require modification
> # ======================================================================
>
> # General rule for building prog from prog.o; $^ (GNU extension) is
> # used in order to list additional object files on which the
> # executable depends
> %: %.o
> 	$(FC) $(FCFLAGS) -o $@ $^ $(LIBSHDF)
>
> # General rules for building prog.o from prog.f90 or prog.F90; $< is
> # used in order to list only the first prerequisite (the source file)
> # and not the additional prerequisites such as module or include files
> %.o: %.f90
> 	$(FC) $(FCFLAGS) -c $^ $(LIBSHDF)
>
> # Utility targets
> .PHONY: clean veryclean
>
> clean:
> 	rm -f *.o *.mod *.MOD
> 	rm -f .last_fourier_transform
> 	rm -f cdm_redshift0_*
> 	rm -f *~ $(PROGRAMS)
>
> The h5dread_f calls are located in the module file (module_correlation_functions.f90).
>
> I am probably doing something wrong in the makefile, because I used the same HDF5 library location to compile another Fortran90+HDF5 code without any trouble. Could you please help me?
>
> The gfortran version used is GNU Fortran (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0, and the HDF5 version is hdf5-1.8.19, compiled with the Fortran interface enabled.
>
> Kind regards,
>
> Guido

--
Guido