Currently I am checking usage on ssd drives with
ceph osd df | egrep 'CLASS|ssd'
I have a use % between 48% and 57%, and assume that with a node failure
(only using 3x replication) 1/3 of this 57% needs to be able to migrate and
be added to a different node.
Is there a better way of checking this (on
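One rough way to sanity-check this, assuming a recent release where
`ceph osd df -f json` reports per-OSD "device_class", "kb" and "kb_used"
fields and that the hosts are roughly evenly sized, is a small script along
these lines:

```
#!/usr/bin/env python3
# Rough headroom estimate for a single-host failure, assuming hosts of
# roughly equal size and 3x replication across hosts. Field names follow
# the JSON that `ceph osd df -f json` emits on recent releases (nodes[]
# with "device_class", "kb", "kb_used"); adjust if your version differs.
import json
import subprocess
import sys

DEVICE_CLASS = "ssd"
NUM_HOSTS = 3            # assumption: adjust to your topology
NEARFULL_RATIO = 0.85    # default nearfull threshold

raw = subprocess.run(
    ["ceph", "osd", "df", "-f", "json"],
    check=True, capture_output=True, text=True,
).stdout
nodes = json.loads(raw)["nodes"]

ssd = [n for n in nodes if n.get("device_class") == DEVICE_CLASS]
total_kb = sum(n["kb"] for n in ssd)
used_kb = sum(n["kb_used"] for n in ssd)

# If one host drops out, its share of the capacity disappears but its
# data still has to be re-replicated onto the survivors.
surviving_kb = total_kb * (NUM_HOSTS - 1) / NUM_HOSTS
projected = used_kb / surviving_kb

print(f"current {DEVICE_CLASS} utilization: {used_kb / total_kb:.1%}")
print(f"projected utilization after losing 1 of {NUM_HOSTS} hosts: {projected:.1%}")
if projected >= NEARFULL_RATIO:
    sys.exit(f"WARNING: would exceed the nearfull ratio ({NEARFULL_RATIO:.0%})")
```

This only approximates the real behaviour; the actual post-failure placement
depends on the CRUSH map and per-OSD weights.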
Thank you for the information, Christian. When you reshard, the bucket id is
updated (with the most recent versions of Ceph, a generation number is
incremented). The first bucket id matches the bucket marker, but after the
first reshard they diverge.
The bucket id is in the names of the currently us
There are a couple of potential explanations. 1) Do you have versioning turned
on? 1a) And do you write the same file over and over, such as a heartbeat file?
2) Do you have lots of incomplete multipart uploads?
If you wouldn’t mind, please run: `radosgw-admin bi list --bucket=epbucket
--max-ent
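While gathering that, both questions above can also be checked from the S3
side; a minimal boto3 sketch (endpoint and credentials are placeholders, the
bucket name is taken from the thread):

```
#!/usr/bin/env python3
# Quick check for the two explanations above: bucket versioning and
# leftover multipart uploads. Endpoint, credentials and bucket name are
# placeholders; point them at your RGW endpoint.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)
bucket = "epbucket"

versioning = s3.get_bucket_versioning(Bucket=bucket)
print("versioning status:", versioning.get("Status", "Disabled"))

# Count in-progress (never completed or aborted) multipart uploads.
count = 0
paginator = s3.get_paginator("list_multipart_uploads")
for page in paginator.paginate(Bucket=bucket):
    count += len(page.get("Uploads", []))
print("incomplete multipart uploads:", count)
```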
Not currently. Those logs aren't generated by any daemons; they come
directly from anything done by the cephadm binary on the host, which tends
to be quite a bit since the cephadm mgr module runs most of its operations
on the host through a copy of the cephadm binary. It doesn't log to journal
bec
This is the third and possibly last release candidate for Reef.
The Reef release comes with a new RocksDB version (7.9.2) [0], which
incorporates several performance improvements and features. Our
internal testing doesn't show any side effects from the new version,
but we are very eager to hear com
Hi,
Quick question about cephadm and its logs. On my cluster, all logs go to
journald, but on each machine I still have /var/log/ceph/cephadm.log that is
actively written.
Is there a way to make cephadm log to journald instead of a file? If yes, did I
miss it in the documentation? Or if no
Hello everyone,
I am getting a [WRN] LARGE_OMAP_OBJECTS: 18 large omap objects warning
in one of my clusters. I see that one of the buckets has a huge number of
shards (1999) and "num_objects": 221185360 when I check bucket stats using
radosgw-admin bucket stats. However, I see only 8 files when I
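For context, the numbers above can be sanity-checked against the usual
per-shard rules of thumb. A small back-of-the-envelope sketch (200000 is the
default osd_deep_scrub_large_omap_object_key_threshold and 100000 the default
rgw_max_objs_per_shard; verify against your own configuration):

```
#!/usr/bin/env python3
# Back-of-the-envelope check for the LARGE_OMAP_OBJECTS warning, using
# the numbers reported by `radosgw-admin bucket stats`.
num_objects = 221_185_360
num_shards = 1_999
warn_threshold = 200_000   # default large-omap key threshold (assumed)
target_per_shard = 100_000 # default dynamic resharding target (assumed)

per_shard = num_objects / num_shards
print(f"average keys per shard: {per_shard:,.0f}")
print(f"above the large-omap warning threshold: {per_shard > warn_threshold}")
print(f"shards needed for ~{target_per_shard:,} keys/shard: "
      f"{-(-num_objects // target_per_shard):,}")
```

If the average is below the warning threshold but the warning still fires,
the keys may be unevenly distributed across shards, or the large omap
objects may belong to something else entirely (e.g. old index objects).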
Hi,
I think we found an explanation for the behaviour, though we still need to
verify it. Just wanted to write it up for posterity.
We already knew that the large number of "purged_snap" keys in the mon
store is responsible for the long synchronization. Removing them
didn't seem to have a n
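For reference, one way to gauge how many of those keys a store holds is to
list them with ceph-kvstore-tool against a stopped monitor; a minimal sketch,
assuming the usual store path and that purged_snap keys live under the
"osd_snap" prefix (the exact layout may differ between releases):

```
#!/usr/bin/env python3
# Count the purged_snap keys in a (stopped!) monitor's store, to gauge
# how much of the store they make up. The store path is an example; the
# key layout printed by ceph-kvstore-tool may vary a bit between
# releases, so this simply greps for the substring.
import subprocess

store = "/var/lib/ceph/mon/ceph-mon1/store.db"   # example path

out = subprocess.run(
    ["ceph-kvstore-tool", "rocksdb", store, "list", "osd_snap"],
    check=True, capture_output=True, text=True,
).stdout

keys = [line for line in out.splitlines() if "purged_snap" in line]
print(f"purged_snap keys under osd_snap: {len(keys)}")
```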
On 7/26/23 22:13, Frank Schilder wrote:
Hi Xiubo.
... I am more interested in the kclient side logs. Just want to
know why that oldest request got stuck so long.
I'm afraid I'm a bad admin in this case. I don't have logs from the host any
more; I would have needed the output of dmesg and thi