We run a mix of Samsung and Intel SSDs; our solution was to write a
script that parses the output of the Samsung SSD Toolkit and the Intel
ISDCT CLI tools, respectively. In our case, we expose those metrics
using node_exporter's textfile collector for ingestion by Prometheus.
It's mostly the same SMART
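In case it's useful, here's a rough sketch of the approach (the paths,
the exact isdct invocation, and the attribute name are placeholders
rather than our actual script):

    #!/bin/sh
    # Sketch: pull one vendor-reported attribute from the Intel ISDCT CLI
    # and expose it as a Prometheus gauge via node_exporter's textfile
    # collector. Paths and field names are assumptions.
    OUT=/var/lib/node_exporter/textfile_collector/ssd_smart.prom
    TMP="${OUT}.tmp"
    {
      echo '# HELP ssd_percentage_used Vendor-reported wear estimate'
      echo '# TYPE ssd_percentage_used gauge'
      isdct show -smart -intelssd 0 \
        | awk -F: '/Percentage Used/ { gsub(/[ %]/, "", $2);
                   print "ssd_percentage_used{drive=\"0\"} " $2 }'
    } > "$TMP" && mv "$TMP" "$OUT"

Writing to a temporary file and renaming it into place keeps
node_exporter from scraping a half-written file.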
"As of August 2021, new container images are pushed to quay.io
registry only. Docker hub won't receive new content for that specific
image but current images remain available."
I've always been curious about this. Does anyone have any experience
spanning an LVM VG over multiple RBDs?
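For concreteness, this is roughly the layout I have in mind (pool,
image, and VG names are made up):

    # Map two RBD images and build one VG/LV spanning both
    rbd create mypool/lvm-disk1 --size 1T
    rbd create mypool/lvm-disk2 --size 1T
    rbd map mypool/lvm-disk1        # e.g. /dev/rbd0
    rbd map mypool/lvm-disk2        # e.g. /dev/rbd1
    pvcreate /dev/rbd0 /dev/rbd1
    vgcreate vg_rbd /dev/rbd0 /dev/rbd1
    lvcreate -l 100%FREE -n lv_data vg_rbd
    mkfs.xfs /dev/vg_rbd/lv_data

Depending on the LVM version, you may also need to allow rbd devices
in lvm.conf before pvcreate will see them.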
___
I should have started out by asking: how is the RBD mounted? Directly
on a host, or through a hypervisor like KVM?
___
What driver did you use to mount the volumes? I believe only
virtio-scsi supports fstrim commands.
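A quick way to check whether discard is actually making it through
from inside the guest (device and mount point are just examples):

    # Non-zero DISC-GRAN / DISC-MAX means the device advertises discard
    lsblk --discard /dev/sda
    # Trim a mounted filesystem, then watch the RBD pool usage afterwards
    fstrim -v /mnt/data

With libvirt/KVM you also need discard='unmap' on the disk's driver
element in addition to the virtio-scsi bus, as far as I know.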
___
Hello,
Bumping this in the hope that someone can shed some light on it. I've
tried to find details on these metrics but have come up empty-handed.
Thank you,
John
___
Hello,
I was hoping someone could clear up the difference between these metrics.
In filestore, the difference between Apply and Commit Latency was
pretty clear, and these metrics gave a good representation of how the
cluster was performing. High commit latency usually meant our journals
were performing poorly
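For reference, the per-OSD commit and apply latencies can be watched
live with:

    # Per-OSD commit and apply latency, in milliseconds
    ceph osd perf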
You need to increase pg_num (the number of placement groups) before
you increase pgp_num (the number of placement groups for placement).
The former creates the PGs; the latter activates them and triggers a
rebalance.
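For example (pool name and PG counts are placeholders):

    # Create the new placement groups first...
    ceph osd pool set mypool pg_num 256
    # ...then let data be remapped onto them
    ceph osd pool set mypool pgp_num 256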
___
I had similar concerns when we moved to bluestore. We run filestore
clusters with HDD OSDs and SSD journals, and I was worried bluestore
wouldn't perform as well with the change from journals to the WAL. As
it turns out, our bluestore clusters outperform our filestore clusters
in all regards: latency, t