Good day
I'm currently decommissioning a cluster that runs EC3+1 (rack failure
domain, with 5 racks); however, the cluster still holds some production
data, since I'm in the process of moving it to our new EC8+2 cluster.
Running Luminous 12.2.13 on Ubuntu 16 HWE, containerized with ceph-ansible.
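For context, this is roughly how a profile like the target 8+2 layout with a
rack failure domain is usually defined; the profile and pool names below are
placeholders, not taken from this thread:
$ # hypothetical EC profile: 8 data + 2 coding chunks, one chunk per rack
$ ceph osd erasure-code-profile set ec-8-2-rack k=8 m=2 crush-failure-domain=rack
$ ceph osd erasure-code-profile get ec-8-2-rack
$ # data pool on that profile (pg counts are illustrative only)
$ ceph osd pool create cephfs_data_ec82 1024 1024 erasure ec-8-2-rack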
Good day
I currently have a problem where my Octopus cluster reports CephFS EC free
space differently from the CephFS EC data pool on my Luminous cluster. The
only difference I notice is the application set per pool (ceph osd pool
application get).
Mounting the volume from a VM in production (Luminous 12.2.13, EC3+1):
10.102.25.18:6789,10.1
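A quick way to compare what each cluster reports is the detailed df output
and the per-pool application tags; the pool name here is a placeholder:
$ # compare reported capacity and MAX AVAIL for the EC data pool on both clusters
$ ceph df detail
$ # show which application(s) the pool is tagged with
$ ceph osd pool application get cephfs_data
MAX AVAIL for an EC pool is derived from raw free space scaled by the
profile's k/(k+m) ratio, so the same capacity can be displayed differently
across releases.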
Hi All
Is there perhaps any updated documentation on optimised sysctl configuration
for Ceph OSD nodes?
I'm seeing a lot of these:
$ netstat -s
...
4955341 packets pruned from receive queue because of socket buffer overrun
...
5866 times the listen queue of a socket overflowed
...
TCPB
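For what it's worth, those counters usually point at the socket receive
buffers and the listen backlog; a minimal sketch of the sysctl knobs
involved, with illustrative values rather than tuned recommendations:
# /etc/sysctl.d/90-ceph-osd.conf -- illustrative values only
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 50000
$ # apply without rebooting
$ sysctl --system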
Good day
We currently have 12 nodes in 4 racks (3x4) and are getting another 3 nodes
to complete the 5th rack, on version 12.2.12, using ceph-ansible & Docker
containers.
With the 3 new nodes (1 rack bucket) we would like to use a
non-containerised setup, since our long-term plan is to complete
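As a sketch of the CRUSH side of adding the 5th rack (bucket and host names
below are placeholders):
$ # create the new rack bucket and attach it under the default root
$ ceph osd crush add-bucket rack5 rack
$ ceph osd crush move rack5 root=default
$ # move each of the 3 new hosts into the new rack bucket
$ ceph osd crush move node13 rack=rack5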
nc wrote:
> On Tue, Oct 15, 2019 at 2:42 AM Jeremi Avenant wrote:
>
>> Good day
>>
>> I'm currently administering a Ceph cluster that consists of HDDs &
>> SSDs. The rule for cephfs_data (ec) is to write to both these drive
>> classifications (HDD+S
Good day
I'm currently administering a Ceph cluster that consists of HDDs &
SSDs. The rule for cephfs_data (ec) is to write to both these drive
classifications (HDD+SSD). I would like to change it so that
cephfs_metadata (non-ec) writes to SSD & cephfs_data (erasure coded "ec")
writes to HDD.
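Roughly the shape of the device-class based rules this would need; the rule
and profile names are placeholders, and the data movement a rule change
triggers is a separate consideration:
$ # replicated rule restricted to SSD, for the metadata pool
$ ceph osd crush rule create-replicated replicated-ssd default host ssd
$ ceph osd pool set cephfs_metadata crush_rule replicated-ssd
$ # EC profile restricted to HDD, plus an erasure rule built from it
$ ceph osd erasure-code-profile set ec31-hdd k=3 m=1 crush-failure-domain=rack crush-device-class=hdd
$ ceph osd crush rule create-erasure ec31-hdd-rule ec31-hdd
$ ceph osd pool set cephfs_data crush_rule ec31-hdd-rule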
Good morning
Q: Is it possible to have a 2nd cephfs_data volume and expose it to the
same OpenStack environment?
Reason being:
Our current profile is configured with an erasure code value of k=3,m=1
(rack level), but we are looking to buy another +- 6PB of storage w/
controllers and were thinking of mo
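On the first question, a second data pool can usually be attached to an
existing CephFS rather than creating a second file system; a sketch with
placeholder pool/profile names and an assumed file system name of "cephfs":
$ # create the second EC data pool and allow EC overwrites for CephFS use
$ ceph osd pool create cephfs_data2 1024 1024 erasure ec-new-profile
$ ceph osd pool set cephfs_data2 allow_ec_overwrites true
$ # attach it to the existing file system
$ ceph fs add_data_pool cephfs cephfs_data2
$ # pin a directory to the new pool via its layout attribute
$ setfattr -n ceph.dir.layout.pool -v cephfs_data2 /mnt/cephfs/new-data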