[gmx-users] Re: Segmentation fault, mdrun_mpi

2012-11-13 Thread Taudt
Hi, I got a similar error for my system: ...
[n020110:27321] *** Process received signal ***
[n020110:27321] Signal: Segmentation fault (11)
[n020110:27321] Signal code: (128)
[n020110:27321] Failing at address: (nil)
[n020110:27321] [ 0] /lib64/libpthread.so.0 [0x38bac0eb70]
[n020110:27321] [ 1]
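The handler output above reports only raw return addresses. A rough sketch of how to get a readable backtrace, assuming core dumps are permitted on the compute node and mdrun_mpi carries debug symbols (the core file name below is just an example):

    # allow core files to be written (set before launching the job)
    ulimit -c unlimited
    # after the crash, open the binary together with the core dump
    gdb $(which mdrun_mpi) core.27321
    # inside gdb, print the call stack of the crashing thread
    (gdb) bt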

Re: [gmx-users] Re: Segmentation fault, mdrun_mpi

2012-10-10 Thread Justin Lemkul
On 10/10/12 1:33 PM, Ladasky wrote: Update: Ladasky wrote: Justin Lemkul wrote: Random segmentation faults are really hard to debug. Can you resume the run using a checkpoint file? That would suggest maybe an MPI problem or something else external to Gromacs. Without a reproducible system

[gmx-users] Re: Segmentation fault, mdrun_mpi

2012-10-10 Thread Ladasky
Update: Ladasky wrote: > Justin Lemkul wrote: >> Random segmentation faults are really hard to debug. Can you resume the run using a checkpoint file? That would suggest maybe an MPI problem or something else external to Gromacs. Without a reproducible system and a debugging
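Resuming from a checkpoint, as suggested in the quoted reply, amounts to pointing mdrun at the state file written during the run. A minimal sketch for an 8-process OpenMPI job; topol.tpr and state.cpt are the default names and may differ in this setup:

    # restart from the last checkpoint and append to the existing output files
    mpirun -np 8 mdrun_mpi -s topol.tpr -cpi state.cpt -append

If the resumed run gets well past the step where it previously died, that argues for a problem in the MPI layer or hardware rather than in the simulation itself.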

Re: [gmx-users] Re: Segmentation fault, mdrun_mpi

2012-10-08 Thread Justin Lemkul
On 10/8/12 4:39 AM, Ladasky wrote: Justin Lemkul wrote: My first guess would be a buggy MPI implementation. I can't comment on hardware specs, but usually the random failures seen in mdrun_mpi are a result of some generic MPI failure. What MPI are you using? I am using the OpenMPI package,
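One way to test the buggy-MPI guess, assuming a GROMACS 4.5 build with the built-in thread-MPI is also available on the machine, is to repeat a stretch of the run without OpenMPI at all; the file names below are placeholders:

    # run on 4 cores via GROMACS' internal thread-MPI, bypassing OpenMPI
    mdrun -nt 4 -s topol.tpr -deffnm md_test

If this binary is stable where mdrun_mpi segfaults, the external MPI stack becomes the prime suspect.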

[gmx-users] Re: Segmentation fault, mdrun_mpi

2012-10-08 Thread Ladasky
Justin Lemkul wrote: > My first guess would be a buggy MPI implementation. I can't comment on hardware specs, but usually the random failures seen in mdrun_mpi are a result of some generic MPI failure. What MPI are you using? I am using the OpenMPI package, version 1.4.3. It's one of t
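The installed OpenMPI version and build details can be confirmed with OpenMPI's own tools (standard OpenMPI commands, nothing GROMACS-specific):

    # report the Open MPI version used by mpirun
    mpirun --version
    # fuller build and configuration details
    ompi_info | head -n 20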

Re: [gmx-users] Re: Segmentation fault, mdrun_mpi

2012-10-07 Thread Justin Lemkul
On 10/7/12 2:15 PM, Ladasky wrote: Justin Lemkul wrote: Random segmentation faults are really hard to debug. Can you resume the run using a checkpoint file? That would suggest maybe an MPI problem or something else external to Gromacs. Without a reproducible system and a debugging backtrace,

[gmx-users] Re: Segmentation fault, mdrun_mpi

2012-10-07 Thread Ladasky
Justin Lemkul wrote: > Random segmentation faults are really hard to debug. Can you resume the run using a checkpoint file? That would suggest maybe an MPI problem or something else external to Gromacs. Without a reproducible system and a debugging backtrace, it's going to be hard to

Re: [gmx-users] Re: Segmentation fault, mdrun_mpi

2012-10-05 Thread Justin Lemkul
On 10/5/12 3:03 PM, Ladasky wrote: Bumping this once before the weekend, hoping to get some help. I am getting segmentation fault errors at 1 to 2 million cycles into my production MD runs, using GROMACS 4.5.4. If these errors are a consequence of a poorly-equilibrated system, I am no longer

[gmx-users] Re: Segmentation fault, mdrun_mpi

2012-10-05 Thread Ladasky
Bumping this once before the weekend, hoping to get some help. I am getting segmentation fault errors at 1 to 2 million cycles into my production MD runs, using GROMACS 4.5.4. If these errors are a consequence of a poorly-equilibrated system, I am no longer getting the right kind of error message
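If a poorly equilibrated system is still a suspect, the energy terms leading up to the crash can be inspected from the energy file with the 4.5-era tools; md.edr and the selected terms are only examples:

    # extract potential energy and temperature up to the crash (term names piped on stdin)
    echo "Potential Temperature" | g_energy -f md.edr -o energy.xvg

A smoothly fluctuating potential right up to the last written frame makes an equilibration problem less likely.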