[ceph-users] How does IOPS/latency scale for additional OSDs? (Intel S3610 SATA SSD, for block storage use case)

2019-10-22 Thread Victor Hooi
Hi, I'm running a 3-node Ceph cluster for VM block storage (Proxmox/KVM). Replication is set to 3. Previously, we were running 1 x Intel Optane 905P 960GB disk p
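
A single-client latency check against an RBD image is one way to compare results before and after adding OSDs; below is a minimal sketch with fio's rbd engine, where the pool and image names (rbd/bench-img) are placeholders:

    # 4K random writes at queue depth 1 to expose per-write latency
    fio --name=qd1-latency --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=bench-img \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 \
        --runtime=60 --time_based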

Re: [ceph-users] How to create multiple Ceph pools, based on drive type/size/model etc?

2019-09-11 Thread Victor Hooi
Hi, Right - but what if you have two types of NVMe drives? I thought that there's only a fixed enum of device classes - hdd, ssd, or nvme. You can't add your own ones, right? Thanks, Victor On Thu, Sep 12, 2019 at 12:54 PM Konstantin Shalygin wrote: > I have a 3-node Ceph cluster, with a mix
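
For what it's worth, CRUSH device classes are not limited to the auto-detected hdd/ssd/nvme labels; a custom class name can be assigned by hand (the OSD IDs and the class name "optane" below are placeholders):

    # clear the auto-assigned class first, then set a custom one
    ceph osd crush rm-device-class osd.0 osd.1
    ceph osd crush set-device-class optane osd.0 osd.1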

[ceph-users] How to create multiple Ceph pools, based on drive type/size/model etc?

2019-09-11 Thread Victor Hooi
Hi, I have a 3-node Ceph cluster, with a mixture of Intel Optane 905P PCIe disks, and normal SATA SSD drives. I want to create two Ceph pools, one with only the Optane disks, and the other with only the SATA SSDs. When I checked "ceph osd tree", all the drives had device class "ssd". As a hack
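
The usual pattern is one CRUSH rule per device class and one pool per rule; a sketch, assuming the Optane OSDs carry the class "nvme" and with illustrative rule/pool names and PG counts:

    # replicated rules that only select OSDs of a given class, spread across hosts
    ceph osd crush rule create-replicated optane-rule default host nvme
    ceph osd crush rule create-replicated sata-rule default host ssd

    # pools bound to those rules
    ceph osd pool create optane-pool 128 128 replicated optane-rule
    ceph osd pool create sata-pool 128 128 replicated sata-rule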

[ceph-users] optane + 4x SSDs for VM disk images?

2019-08-11 Thread Victor Hooi
Hi, I am building a 3-node Ceph cluster to store VM disk images. We are running Ceph Nautilus with KVM. Each node has: Xeon 4116, 512 GB RAM, and an Optane 905P NVMe disk with 980 GB. Previously, I was creating four OSDs per Optane disk, and using only Optane disks for all storage. However, if I
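
Splitting one NVMe device into several OSDs is typically done with ceph-volume; a sketch, where the device path and OSD count are illustrative:

    # carve one Optane device into four LVM-backed bluestore OSDs
    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1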

[ceph-users] Running ceph status as non-root user?

2019-03-15 Thread Victor Hooi
Hi, I'm attempting to set up Telegraf on a Proxmox machine to send Ceph information into InfluxDB. I had a few issues around permissions (https://github.com/influxdata/telegraf/issues/5590), but
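
One common workaround is to give the monitoring user its own read-only cephx key in a keyring file it can read; the client name and paths below are placeholders:

    # read-only key for monitoring
    ceph auth get-or-create client.telegraf mon 'allow r' mgr 'allow r' \
        -o /etc/ceph/ceph.client.telegraf.keyring
    chgrp telegraf /etc/ceph/ceph.client.telegraf.keyring
    chmod 640 /etc/ceph/ceph.client.telegraf.keyring

    # then, as the telegraf user:
    ceph --id telegraf status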

Re: [ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Victor Hooi
Hi, I have retested with 4K blocks - results are below. I am currently using 4 OSDs per Optane 900P drive. This was based on some posts I found on Proxmox Forums, and what seems to be "tribal knowledge" there. I also saw this presentation
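
For reference, a 4K rados bench against a scratch pool looks roughly like this (the pool name and concurrency are arbitrary):

    # 60-second 4K write test with 16 concurrent ops, keeping the objects
    rados bench -p scratch 60 write -b 4096 -t 16 --no-cleanup
    # random-read pass over the objects written above
    rados bench -p scratch 60 rand -t 16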

Re: [ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Victor Hooi
> Remember Ceph scales with the number of physical disks you have. As you only have 3 disks, every piece of I/O is hitting all 3 disks; if you had 6 disks, for example, and still did replication of 3, then only 50% of I/O would be hitting each disk, therefore I'd expect to
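
Spelled out: with replication 3, each client write becomes 3 OSD writes, so with an even PG distribution each OSD sees roughly 3/N of the write traffic. At N=3 OSDs every write touches every disk (100%), at N=6 a given disk sees about 50% of writes, at N=12 about 25%. Aggregate write IOPS should therefore grow roughly linearly with OSD count until CPU or network becomes the limit.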

[ceph-users] 3-node cluster with 3 x Intel Optane 900P - very low benchmarked performance (200 IOPS)?

2019-03-09 Thread Victor Hooi
Hi, I'm setting up a 3-node Proxmox cluster with Ceph as the shared storage, based around Intel Optane 900P drives (which are meant to be the bee's knees), and I'm seeing pretty low IOPS/bandwidth. - 3 nodes, each running a Ceph monitor daemon, and OSDs. - Node 1 has 48 GB of RAM and 10 cor
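
A quick sanity check from one of the nodes can be done with rbd bench; the pool and image names here are placeholders:

    # create a throwaway image and hit it with 4K random writes
    rbd create scratch/bench-img --size 10G
    rbd bench --io-type write --io-pattern rand --io-size 4K \
        --io-threads 16 --io-total 1G scratch/bench-img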