Hi All,
We have recently deployed a new Ceph cluster (Octopus 15.2.4) which consists of:
12 OSD nodes (16 cores + 200GB RAM, 30x14TB disks, CentOS 8)
3 Mon nodes (8 cores + 15GB RAM, CentOS 8)
We use Erasure Coded Pool and RBD block devices.
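For context, an RBD image cannot keep its metadata on an erasure coded pool, so the usual arrangement is a replicated pool for the image metadata with --data-pool pointing at the EC pool; a rough sketch with placeholder pool and image names:

ceph osd pool set ec-data-pool allow_ec_overwrites true
rbd create --size 1T --data-pool ec-data-pool rbd-meta-pool/image01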
3 Ceph clients use the RBD devices; each has 25 RBDs. [...] How can we
increase the throughput?
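For a baseline, a single image can be benchmarked directly from one of the clients with rbd bench, for example (pool and image names are placeholders):

rbd bench --io-type write --io-size 4M --io-threads 16 --io-total 10G rbd-meta-pool/image01

Comparing that against what the applications on the clients actually see should at least show whether the bottleneck is in the cluster or in the client path.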
Thanks and regards,
Athreya
On Tue, Nov 10, 2020 at 7:10 PM Jason Dillaman wrote:
> On Tue, Nov 10, 2020 at 1:52 PM athreyavc wrote:
Hi,
We have recently deployed a Ceph cluster with:
12 OSD nodes (16 cores + 200GB RAM + 30 disks of 14TB each), running CentOS 8
3 monitor nodes (8 cores + 16GB RAM), running CentOS 8
We are using Ceph Octopus with RBD block devices.
We have three Ceph client nodes (16 cores + 30GB RAM, ...).
From different search results I read that disabling cephx can help.
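If we try that, my understanding is that it is controlled by the auth options in ceph.conf on all nodes, something like the sketch below; it disables authentication cluster-wide, so it is a security trade-off, and the daemons need a restart afterwards:

[global]
auth_cluster_required = none
auth_service_required = none
auth_client_required = none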
Also, https://static.linaro.org/connect/san19/presentations/san19-120.pdf
recommends some changes to the bluestore cache settings:
[osd]
bluestore_cache_autotune = 0
bluestore_cache_kv_ratio = 0.2
bluestore_cache_meta_ratio = 0.8
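For testing, the same options can apparently also be set through the monitor config database instead of editing ceph.conf on every OSD node, e.g.:

ceph config set osd bluestore_cache_autotune false
ceph config set osd bluestore_cache_kv_ratio 0.2
ceph config set osd bluestore_cache_meta_ratio 0.8
ceph config show osd.0 | grep bluestore_cache

(Some bluestore cache options may still need an OSD restart to take effect, so the config show output is worth checking.)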
> ^ south bridge + RAID controller to disk ops and latency.
>
> -Edward Kalk
> Datacenter Virtualization
> Performance Engineering
> Socket Telecom
> Columbia, MO, USA
> ek...@socket.net
>
> > On Nov 12, 2020, at 4:45 AM, athreyavc wrote:
> >
> > Jumbo frame
[...] or concern.
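On the jumbo frame point, a quick way to confirm the MTU is really honoured end-to-end on the public and cluster networks (interface name and peer address below are placeholders):

ip link show dev eth0 | grep mtu
ping -M do -s 8972 192.168.1.10

8972 bytes of payload plus the 28 bytes of IP/ICMP headers is exactly 9000, so if this ping fails while a normal ping works, something in the path is not passing jumbo frames.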
Thanks and regards,
Athreya
On Thu, Nov 12, 2020 at 1:30 PM athreyavc wrote:
> Hi,
>
> Thanks for the email, but we are not using RAID at all; we are using HBAs
> (LSI HBA 9400-8e). Each HDD is configured as an OSD.
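> To see whether any single disk behind those HBAs stands out, the per-OSD
> latency counters and the built-in OSD write benchmark are probably the
> quickest checks (osd.0 below is just an example ID):
>
> ceph osd perf
> ceph tell osd.0 bench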
>
> On Thu, Nov 12, 2020 at 12:19 PM Edward Kalk wrote:
Hi,
I am trying to re-add an OSD after replacing its disk. I am running:
ceph-volume lvm create --bluestore --osd-id 41 --data ceph-dm-41/block-dm-41
and I get:
--> AttributeError: module 'ceph_volume.api.lvm' has no attribute 'is_lv'
My Ceph version on the OSDs is 15.2.9.