I can confirm that this is a failure in the Ceph 14.2.4 dashboard, as I am also
seeing it when I check the free space under "Pools".
On 8 October 2019 07:54:58 CEST, "Yordan Yordanov (Innologica)" wrote:
Hi Igor,
Thank you for responding. In this case this looks like a breaking change. I
know of two applications that are now incorrectly displaying the pool usage and
capacity. It looks like they both rely on dividing the USED field by the
number of replicas. One of those applications is actu…
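If it helps to see the difference concretely, here is a rough Python sketch of
what such tooling used to compute versus what Nautilus now reports. It assumes
the Nautilus-style `ceph df --format json` output where each pool's stats carry
a logical "stored" value and a raw "bytes_used" value, and uses the pool name
from this thread; adjust the names if your release differs.

    #!/usr/bin/env python3
    import json
    import subprocess

    POOL = "one"  # pool name taken from this thread

    def ceph_json(*args):
        # Run the ceph CLI and parse its JSON output.
        out = subprocess.check_output(["ceph", *args, "--format", "json"])
        return json.loads(out)

    df = ceph_json("df")
    replica_size = ceph_json("osd", "pool", "get", POOL, "size")["size"]

    stats = next(p for p in df["pools"] if p["name"] == POOL)["stats"]

    # Older tooling divided the USED figure by the replica count to get the
    # user-facing amount; in Nautilus the logical amount is reported directly.
    legacy_guess = stats["bytes_used"] / replica_size

    print(f"replica size       : {replica_size}")
    print(f"STORED (logical)   : {stats['stored']}")
    print(f"USED (raw)         : {stats['bytes_used']}")
    print(f"USED / replica size: {legacy_guess:.0f}  (old assumption)")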
I think this might be related to a problem I'm having with "ceph osd
pool autoscale-status". SIZE appears to be raw usage (data * 3 in our
case), while TARGET SIZE seems to expect the user-facing size. For
example, I have an 87 TiB dataset that I'm currently copying into a
CephFS. "du -sh" shows tha…
Hi Yordan,
this is Mimic documentation and these snippets aren't valid for Nautilus
any more. They are still present in the Nautilus pages, though.
Going to create a corresponding ticket to fix that.
The relevant Nautilus changes for the 'ceph df [detail]' command can be found
in the Nautilus release notes.
The documentation states:
https://docs.ceph.com/docs/mimic/rados/operations/monitoring/
The POOLS section of the output provides a list of pools and the notional usage
of each pool. The output from this section DOES NOT reflect replicas, clones or
snapshots. For example, if you store an object with 1MB of data, the notional
usage will be 1MB, but the actual usage may be 2MB or more depending on the
number of replicas, clones and snapshots.
On 9/25/19 3:22 PM, nalexand...@innologica.com wrote:
> Hi everyone,
>
> We are running Nautilus 14.2.2 with 6 nodes and a total of 44 OSDs, all are
> 2TB spinning disks.
> # ceph osd count-metadata osd_objectstore
> "bluestore": 44
> # ceph osd pool get one size
> size: 3
> # ceph df
> R…
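For reference, a minimal Python sketch of the numbers one would expect for a
cluster like this (44 x 2 TB OSDs, size = 3), ignoring BlueStore overhead,
full ratios and uneven utilisation:

    TB = 10 ** 12            # the drives are decimal terabytes
    osds = 44
    raw = osds * 2 * TB      # ~88 TB of raw capacity
    usable = raw / 3         # 3x replication

    print(f"raw capacity    : {raw / TB:.0f} TB")
    print(f"usable (size=3) : {usable / TB:.1f} TB")

Whether the dashboard and 'ceph df' report the raw or the usable figure is
exactly what changed between releases and what this thread is about.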