** Description changed:

  [ Impact ]
  
   * Users experience brick SEGFAULTs under certain not-yet-understood
  scenarios. Some reports include a high percentage of small file I/O. I
  encountered the issue roughly every hour with Minio backed by GlusterFS
  on ZFS.
  
   * This bug introduces an increased risk of data loss or corruption
  depending on the user's configuration and timing of brick crashes.
  
   * Core dumps from multiple users revealed that the SEGFAULTs are caused
  by a stack overflow when namespaced inodes are destroyed.
  
   * The patch removes the recursive call to inode_unref when a namespaced
  inode is destroyed.
  
  [ Test Plan ]
  
  * I experienced brick crashes on specific volumes about once per hour.
  On my system, the issue only affected a locally mounted volume backing
  a Minio instance (an S3-API-compatible server) used by Restic clients
  (an incremental backup system that performs many small file creations
  and deletions). Other volumes, served via NFS Ganesha with primarily
  large-file random access, never triggered it.
  
  * I attempted to replicate the workload by running various file system
  benchmarking tools within their own user namespace (i.e., lots of small
  file creations and deletions), but was not able to reproduce the crash.
  
  * I've been running the proposed patch since 2024-05-06 and haven't
  experienced a single crash.
  
- Therefore, the test plan is to run the packages from proposed for at
- least a day, under the same load as when the bug happened, and confirm
- that the crashes reported in this bug no longer happen.
+ * The test plan is to run the packages from proposed for at least a day,
+ under the same load as when the bug happened, and confirm that the
+ crashes reported in this bug no longer happen.
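
   As a concrete, hypothetical sketch of that verification (the release
name "noble" and the package names below are assumptions; substitute
whatever matches the affected system), enabling the proposed pocket and
installing the fixed packages might look like:

```shell
# Assumption: release name "noble"; substitute your Ubuntu release.
release=noble

# Enable the -proposed pocket.
echo "deb http://archive.ubuntu.com/ubuntu ${release}-proposed main universe" \
  | sudo tee /etc/apt/sources.list.d/proposed.list
sudo apt-get update

# Install only the glusterfs packages from -proposed, not everything in it.
sudo apt-get install -t "${release}-proposed" glusterfs-server glusterfs-client

# Run the original workload for at least a day and watch for brick crashes.
sudo journalctl -f | grep -iE 'segfault|glusterfsd'
```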
  
  [ Where problems could occur ]
  
   * It's conceivable that this patch introduces undesired behavior when
  inodes are destroyed; however, this seems unlikely, as __inode_destroy
  was not recursive before the change that introduced the bug.
  
  [ Other Info ]
  
   * PR which introduced the bug: https://github.com/gluster/glusterfs/pull/1763
   * PR which added this patch: https://github.com/gluster/glusterfs/pull/4302
   * Issue discussion: https://github.com/gluster/glusterfs/issues/4295

https://bugs.launchpad.net/bugs/2064843

Title:
  Brick SEGFAULTs in 11.1
