I see that you used 3 GPUs and 3 MPI processes, judging from the command you posted on GitHub:

command: which relion_refine_mpi --continue
Refine3D/job006/run_it000_optimiser.star --o Refine3D/job008/run
--dont_combine_weights_via_disc --no_parallel_disc_io --preread_images
--pool 3 --pad 1 --particle_diameter 16
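For reference, a full launch of such a job usually wraps that command in mpirun and adds an explicit GPU mapping and thread count. The sketch below is only illustrative: the rank count, the --gpu mapping string, the --j value and the device IDs are assumptions, not the poster's exact settings.

# Illustrative only: rank 0 is the leader and does no GPU work; the three
# follower ranks are each mapped to one card via the colon-separated --gpu string.
mpirun -n 4 `which relion_refine_mpi` \
    --continue Refine3D/job006/run_it000_optimiser.star \
    --o Refine3D/job008/run \
    --dont_combine_weights_via_disc --no_parallel_disc_io --preread_images \
    --pool 3 --pad 1 --particle_diameter 16 \
    --gpu "0:1:2" --j 4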
> [...] are failing. Cryosparc is not a problem either.
> Thanks
> Dhiraj
> From: Takanori Nakane
> Sent: Friday, December 22, 2023 5:35 PM
> To: Srivastava, Dhiraj
> Cc: CCP4BB@JISCMAIL.AC.UK
> Subject: [External] Re: [ccp4bb] Relion issue with MPI
>
> Hi,
>
>
From: Takanori Nakane
Sent: Friday, December 22, 2023 5:35 PM
To: Srivastava, Dhiraj
Cc: CCP4BB@JISCMAIL.AC.UK
Subject: [External] Re: [ccp4bb] Relion issue with MPI
Hi,
First of all, please report details of your hardware and your job.
- Type of GPU
- Number of GPU
- GPU memory size
- Box size
- Number of threads
- Number of MPI processes
- Full command line
Do you get the same error in ALL datasets (including our
tutorial dataset) or only on this particular dataset?
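As an aside, most of those items can be collected quickly on a Linux machine with NVIDIA cards; the commands below are a generic sketch (the job path is a placeholder), relying on nvidia-smi and on the note.txt file that RELION writes into each job directory with the command line:

# GPU type, count and memory
nvidia-smi --query-gpu=name,memory.total --format=csv
# CPU cores and system RAM
nproc
free -h
# Command line as recorded by RELION for the job (placeholder path)
cat Refine3D/job008/note.txt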
Hi,
I am trying to use RELION and I am getting an error when trying to use MPI (for 3D
classification and 3D auto-refine).
ERROR: out of memory in
/home/lvantol/relion5/relion/src/acc/cuda/custom_allocator.cuh at line 436
(error-code 2)
in: /home/lvantol/relion5/relion/src/acc/cuda/cuda_settings.
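This error comes from RELION's CUDA allocator, i.e. the GPU itself ran out of memory during the run. A generic way to confirm that, and the mitigations that are usually tried first, are sketched below; the flag values are illustrative assumptions, not a confirmed fix for this particular job:

# Watch GPU memory while the job runs (1-second refresh) to confirm VRAM exhaustion
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1
# Commonly tried mitigations (illustrative, adjust to the actual job):
#   - map only one MPI follower per GPU, e.g. --gpu "0:1:2" with 4 MPI ranks
#   - lower --pool (e.g. --pool 1) so fewer particles are batched per GPU
#   - keep --pad 1, which shrinks the padded reference volumes on the GPU
#   - rescale the particles to a smaller box size if the target resolution allows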