[ceph-users] Interpreting reason for blocked request

2018-05-12 Thread Bryan Henderson
I recently had some requests blocked indefinitely; I eventually cleared it up by recycling the OSDs, but I'd like some help interpreting the log messages that supposedly give a clue as to what caused the blockage: (I reformatted for easy email reading) 2018-05-03 01:56:35.248623 osd.0 192.168.1.16:…
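The interesting fields in a slow-request warning are how old the op is, what the op was, and what it is currently waiting for. Below is a minimal sketch of pulling those fields out of such a log line; the sample line and the exact field layout are assumptions based on the typical Luminous-era format, not taken from the thread, and may differ between Ceph releases.

```python
import re

# Hypothetical example of a slow-request warning line (format assumed;
# the layout can vary between Ceph releases).
line = ("2018-05-03 01:56:35.248623 osd.0 192.168.1.16:6800/1234 : cluster [WRN] "
        "slow request 30.429654 seconds old, received at 2018-05-03 01:56:04.818969: "
        "osd_op(client.4135.0:7 1.2 1:abcdef:::obj:head [write 0~4096] snapc 0=[] "
        "ondisk+write+known_if_redirected e42) currently waiting for subops from 1,2")

# Extract the three fields that matter when diagnosing a blockage:
# how long the op has waited, what the op is, and what it is waiting for.
pattern = re.compile(
    r"slow request (?P<age>[\d.]+) seconds old, "
    r"received at (?P<received>[\d\- :.]+): "
    r"(?P<op>osd_op\(.*\)) currently (?P<state>.+)$"
)

m = pattern.search(line)
if m:
    print(f"age:   {m.group('age')}s")
    print(f"state: {m.group('state')}")
```

The trailing "currently …" clause is usually the most useful part: "waiting for subops" points at a peer OSD, while states like "waiting for rw locks" or "waiting for degraded object" point at recovery or locking within the OSD itself.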

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-12 Thread Oliver Schulz
Thanks! On 12.05.2018 21:17, David Turner wrote: I would suggest 2GB partitions for WAL partitions and 150GB osds to make an SSD only pool for the fs metadata pool. I know that doesn't use the whole disk, but there's no need or reason to. By under-provisioning the nvme it just adds that much more longevity to the life of the drive. …

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-12 Thread David Turner
I would suggest 2GB partitions for WAL partitions and 150GB osds to make an SSD only pool for the fs metadata pool. I know that doesn't use the whole disk, but there's no need or reason to. By under-provisioning the nvme it just adds that much more longevity to the life of the drive. You cannot ch…
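As a back-of-the-envelope check of the layout suggested above (2 GB WAL partitions per OSD plus one 150 GB SSD-only OSD for the fs metadata pool), the sketch below computes how much of the NVMe stays unprovisioned. The 1 TB device size and the 10-OSD count are hypothetical values chosen for illustration, not figures from the thread.

```python
# Sizes in whole gigabytes for clarity.
device_gb = 1000          # assumed NVMe capacity (hypothetical)
num_osds = 10             # assumed number of HDD OSDs sharing the device (hypothetical)
wal_gb_per_osd = 2        # per the suggestion in the thread
metadata_osd_gb = 150     # SSD-only OSD for the fs metadata pool

used = num_osds * wal_gb_per_osd + metadata_osd_gb
unprovisioned = device_gb - used

print(f"provisioned: {used} GB")
print(f"left unused: {unprovisioned} GB "
      f"({100 * unprovisioned / device_gb:.0f}% under-provisioned)")
```

With these assumed numbers only 170 GB of the device is allocated; the rest is deliberately left empty, which gives the drive's wear-leveling more spare area and extends its lifetime, as the thread notes.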

Re: [ceph-users] Shared WAL/DB device partition for multiple OSDs?

2018-05-12 Thread Oliver Schulz
Dear David, On 11.05.2018 22:10, David Turner wrote: For if you should do WAL only on the NVMe vs use a filestore journal, that depends on your write patterns, use case, etc. We mostly use CephFS for scientific data processing. It's mainly larger files (10 MB to 10 GB, but sometimes also a bu…

Re: [ceph-users] Bucket reporting content inconsistently

2018-05-12 Thread Tom W
Thanks for posting this for me, Sean. Just to update: it seems that despite the bucket checks completing and reporting no issues, the objects continued to appear in any tool used to list the contents of the bucket. I put together a simple loop to upload a new file to overwrite the existing one, then tr…