[ceph-users] 2 PGs (1x inconsistent, 1x unfound/degraded) - unable to fix

2021-03-09 Thread Jeremi Avenant
Good day. I'm currently decommissioning a cluster that runs EC3+1 (rack failure domain, with 5 racks); however, the cluster still has some production items on it, since I'm in the process of moving it to our new EC8+2 cluster. Running Luminous 12.2.13 on Ubuntu 16 HWE, containerized with ceph-ansible…
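
A minimal sketch of the commands typically used to investigate this kind of state on Luminous; the PG IDs below are placeholders, not taken from the thread:

    # locate the problem PGs
    $ ceph health detail
    $ ceph pg ls inconsistent

    # for the inconsistent PG: list the damaged objects, then ask the OSDs to repair
    $ rados list-inconsistent-obj 2.1ab --format=json-pretty
    $ ceph pg repair 2.1ab

    # for the unfound objects: check which OSDs are still being probed,
    # and only as a last resort give up on them (revert is not supported on EC pools)
    $ ceph pg 2.3f query
    $ ceph pg 2.3f list_unfound
    $ ceph pg 2.3f mark_unfound_lost delete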

[ceph-users] Erasure Space not showing on Octopus

2020-12-18 Thread Jeremi Avenant
Good day. I currently have a problem where my Octopus cluster shows CephFS EC free space differently from my Luminous cluster's CephFS EC data pool. The only difference I notice is the "get application" per pool. Mounting the volume from a VM in production: Luminous 12.2.13 EC3+1: 10.102.25.18:6789,10.1…
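
A hedged sketch of how the space accounting and the per-pool application tag could be compared on both clusters; the pool name cephfs_data is an assumption:

    # compare raw vs. per-pool accounting and the EC profile on each cluster
    $ ceph df detail
    $ ceph osd pool get cephfs_data erasure_code_profile
    $ ceph osd erasure-code-profile get <profile-name>   # name as reported above

    # check / set the application tag mentioned in the post
    $ ceph osd pool application get cephfs_data
    $ ceph osd pool application enable cephfs_data cephfs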

[ceph-users] ceph OSD node optimised sysctl configuration

2020-07-20 Thread Jeremi Avenant
Hi All. Is there perhaps any updated documentation about Ceph OSD node optimised sysctl configuration? I'm seeing a lot of these: $ netstat -s ... 4955341 packets pruned from receive queue because of socket buffer overrun ... 5866 times the listen queue of a socket overflowed ... TCPB…
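
The counters quoted point at receive-buffer and listen-backlog pressure; a minimal sketch of the sysctl knobs that usually address them (the values are illustrative starting points, not recommendations from this thread):

    # socket buffer limits ("packets pruned ... socket buffer overrun")
    $ sysctl -w net.core.rmem_max=16777216
    $ sysctl -w net.core.wmem_max=16777216
    $ sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
    $ sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

    # accept/backlog queues ("listen queue of a socket overflowed")
    $ sysctl -w net.core.somaxconn=1024
    $ sysctl -w net.core.netdev_max_backlog=50000
    $ sysctl -w net.ipv4.tcp_max_syn_backlog=30000

    # persist in /etc/sysctl.d/90-ceph-osd.conf and apply with: sysctl --system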

[ceph-users] Adding new non-containerised hosts to the current containerised environment and moving away from containers going forward

2019-11-11 Thread Jeremi Avenant
Good day. We currently have 12 nodes in 4 racks (3x4) and are getting another 3 nodes to complete the 5th rack, on version 12.2.12, using ceph-ansible & Docker containers. With the 3 new nodes (1 rack bucket) we would like to make use of a non-containerised setup, since our long-term plan is to complete…
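
A small sketch of the CRUSH side of bringing the 5th rack in; the bucket and host names (rack5, node13, ...) are hypothetical and not from the thread:

    # create the 5th rack bucket and place it under the default root
    $ ceph osd crush add-bucket rack5 rack
    $ ceph osd crush move rack5 root=default

    # once the new hosts' OSDs exist, move each host bucket into the new rack
    $ ceph osd crush move node13 rack=rack5
    $ ceph osd crush move node14 rack=rack5
    $ ceph osd crush move node15 rack=rack5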

[ceph-users] Re: Dealing with changing EC Rules with drive classifications

2019-10-16 Thread Jeremi Avenant
nc wrote: > On Tue, Oct 15, 2019 at 2:42 AM Jeremi Avenant wrote: > >> Good day >> >> I'm currently administering a Ceph cluster that consists of HDDs & >> SSDs. The rule for cephfs_data (ec) is to write to both these drive >> classifications (HDD+SSD)…

[ceph-users] Dealing with changing EC Rules with drive classifications

2019-10-15 Thread Jeremi Avenant
Good day. I'm currently administering a Ceph cluster that consists of HDDs & SSDs. The rule for cephfs_data (ec) is to write to both these drive classifications (HDD+SSD). I would like to change it so that cephfs_metadata (non-ec) writes to SSD & cephfs_data (erasure-coded "ec") writes to HDD…
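
A hedged sketch of how this is usually done with CRUSH device classes on Luminous; the rule and profile names below are made up, and note that an existing pool's EC profile cannot be changed, only its CRUSH rule:

    # replicated rule pinned to SSDs for the metadata pool
    $ ceph osd crush rule create-replicated replicated_ssd default host ssd
    $ ceph osd pool set cephfs_metadata crush_rule replicated_ssd

    # EC rule pinned to HDDs, keeping k=3 m=1 and the rack failure domain
    $ ceph osd erasure-code-profile set ec31_hdd k=3 m=1 crush-failure-domain=rack crush-device-class=hdd
    $ ceph osd crush rule create-erasure ec31_hdd_rule ec31_hdd
    $ ceph osd pool set cephfs_data crush_rule ec31_hdd_rule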

[ceph-users] Is it possible to have a 2nd cephfs_data volume? [Openstack]

2019-10-09 Thread Jeremi Avenant
Good morning. Q: Is it possible to have a 2nd cephfs_data volume and expose it to the same OpenStack environment? Reason being: our current profile is configured with an erasure-code value of k=3,m=1 (rack level), but we are looking to buy another +-6PB of storage w/ controllers and were thinking of mo…
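
A minimal sketch of adding a second data pool to an existing CephFS and steering a directory to it via a file layout; the pool, profile, filesystem, and path names are assumptions:

    # create the additional EC pool (profile name illustrative) and allow overwrites
    $ ceph osd pool create cephfs_data2 1024 1024 erasure ec_new_profile
    $ ceph osd pool set cephfs_data2 allow_ec_overwrites true
    $ ceph osd pool application enable cephfs_data2 cephfs

    # attach it to the existing filesystem as a second data pool
    $ ceph fs add_data_pool cephfs cephfs_data2

    # direct new files under a given directory to the new pool
    $ setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/new_project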