[ceph-users] precise/best way to check ssd usage

2023-07-28 Thread Marc
Currently I am checking usage on ssd drives with `ceph osd df | egrep 'CLASS|ssd'`. I have a usage % between 48% and 57%, and assume that with a node failure 1/3 (only using 3x repl.) of this 57% needs to be able to migrate and be added to a different node. Is there a better way of checking this (on
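A minimal sketch of one way to put numbers on that, assuming jq is available and that a failed host's data would spread roughly evenly over the remaining ssd OSDs:

    # Sum size and usage over all OSDs in the ssd class from the JSON output.
    ceph osd df -f json | jq -r '
      [.nodes[] | select(.device_class == "ssd")]
      | "ssd total KiB: \(map(.kb) | add)  used KiB: \(map(.kb_used) | add)"'

    # Rough headroom check: after losing one of three hosts, the used KiB must
    # fit into roughly 2/3 of the total, ideally while staying under the
    # nearfull ratio (0.85 by default).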

[ceph-users] Re: Not all Bucket Shards being used

2023-07-28 Thread J. Eric Ivancich
Thank you for the information, Christian. When you reshard, the bucket id is updated (with most recent versions of ceph, a generation number is incremented). The first bucket id matches the bucket marker, but after the first reshard they diverge. The bucket id is in the names of the currently us
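A hedged illustration of checking that divergence; the bucket name is a placeholder:

    # Before the first reshard "id" and "marker" match; afterwards only
    # "marker" keeps the original value while "id" reflects the new generation.
    radosgw-admin bucket stats --bucket=mybucket | grep -E '"id"|"marker"'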

[ceph-users] Re: LARGE_OMAP_OBJECTS warning and bucket has lot of unknown objects and 1999 shards.

2023-07-28 Thread J. Eric Ivancich
There are a couple of potential explanations. 1) Do you have versioning turned on? 1a) And do you write the same file over and over, such as a heartbeat file? 2) Do you have lots of incomplete multipart uploads? If you wouldn’t mind, please run: `radosgw-admin bi list --bucket=epbucket --max-ent
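A hedged sketch of checking both possibilities from the S3 side; the endpoint URL is a placeholder and it assumes the aws CLI is pointed at the RGW with working credentials:

    # 1) Is versioning enabled on the bucket?
    aws --endpoint-url http://rgw.example.com:8080 \
        s3api get-bucket-versioning --bucket epbucket

    # 2) How many incomplete multipart uploads are lying around?
    aws --endpoint-url http://rgw.example.com:8080 \
        s3api list-multipart-uploads --bucket epbucket | grep -c UploadId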

[ceph-users] Re: cephadm logs

2023-07-28 Thread Adam King
Not currently. Those logs aren't generated by any daemons; they come directly from anything done by the cephadm binary on the host, which tends to be quite a bit since the cephadm mgr module runs most of its operations on the host through a copy of the cephadm binary. It doesn't log to journal bec
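For reference, a small hedged example of working with that file as-is; the logrotate path is an assumption based on a typical cephadm install:

    # Follow what the cephadm binary itself is doing on this host.
    tail -f /var/log/ceph/cephadm.log

    # Rotation of that file is normally handled by a logrotate snippet.
    cat /etc/logrotate.d/cephadm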

[ceph-users] Reef release candidate - v18.1.3

2023-07-28 Thread Yuri Weinstein
This is the third and possibly last release candidate for Reef. The Reef release comes with a new RocksDB version (7.9.2) [0], which incorporates several performance improvements and features. Our internal testing doesn't show any side effects from the new version, but we are very eager to hear com

[ceph-users] cephadm logs

2023-07-28 Thread Luis Domingues
Hi, Quick question about cephadm and its logs. On my cluster all logs go to journald. But on each machine, /var/log/ceph/cephadm.log is still alive. Is there a way to make cephadm log to journald instead of a file? If yes, did I miss it in the documentation? Or if no

[ceph-users] LARGE_OMAP_OBJECTS warning and bucket has lot of unknown objects and 1999 shards.

2023-07-28 Thread Uday Bhaskar Jalagam
Hello Everyone, I am getting a [WRN] LARGE_OMAP_OBJECTS: 18 large omap objects warning in one of my clusters. I see that one of the buckets has a huge number of shards (1999) and "num_objects": 221185360 when I check bucket stats using radosgw-admin bucket stats. However I see only 8 files when I
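A hedged sketch of seeing which index shard objects actually hold the omap keys; the pool name is the common default and <bucket id> comes from radosgw-admin bucket stats:

    # Count omap keys per index shard object for the bucket's current id.
    pool=default.rgw.buckets.index
    for obj in $(rados -p "$pool" ls | grep '<bucket id>'); do
        echo "$obj $(rados -p "$pool" listomapkeys "$obj" | wc -l)"
    done | sort -k2 -n | tail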

[ceph-users] Re: MON sync time depends on outage duration

2023-07-28 Thread Eugen Block
Hi, I think we found an explanation for the behaviour; we still need to verify it, though. Just wanted to write it up for posterity. We already knew that the large number of "purged_snap" keys in the mon store is responsible for the long synchronization. Removing them didn't seem to have a n
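A hedged sketch of counting those keys; it assumes the monitor is stopped before its store is touched, and the path shown is the traditional layout (a cephadm deployment keeps it under /var/lib/ceph/<fsid>/mon.<name>):

    # With the mon stopped, dump the store keys and count the purged_snap ones.
    ceph-monstore-tool /var/lib/ceph/mon/ceph-$(hostname -s) dump-keys \
        | grep -c purged_snap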

[ceph-users] Re: MDS stuck in rejoin

2023-07-28 Thread Xiubo Li
On 7/26/23 22:13, Frank Schilder wrote: Hi Xiubo. ... I am more interested in the kclient side logs. Just want to know why that oldest request got stuck so long. I'm afraid I'm a bad admin in this case. I don't have logs from the host any more; I would have needed the output of dmesg and thi
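For next time, a hedged sketch of the kernel-client side information that tends to be useful here; it assumes debugfs is mounted and the kernel CephFS client is in use:

    # Kernel messages from the client host.
    dmesg -T | grep -i ceph

    # In-flight MDS and OSD requests as seen by the kernel client
    # (one directory per mounted client instance).
    for d in /sys/kernel/debug/ceph/*/; do
        echo "== $d"
        cat "${d}mdsc" "${d}osdc"
    done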