Hi Francois,
> For the mpu's it is less important as I can fix them with some scripts.
Would you mind sharing how you get rid of these left-over mpu objects?
I’ve been trying to get rid of them without much success.
I tried "radosgw-admin bucket check --bucket --fix --check-objects", but it
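For reference, a script to clear left-over multipart uploads could look like the following minimal sketch. It assumes S3 credentials with access to the bucket; the helper name `abort_stale_mpus` and the 7-day age threshold are illustrative, not from this thread.

```python
from datetime import datetime, timedelta, timezone

def abort_stale_mpus(client, bucket, max_age=timedelta(days=7)):
    """Abort multipart uploads in `bucket` older than `max_age`.

    `client` is a boto3 S3 client (or anything exposing the same
    list_multipart_uploads / abort_multipart_upload interface).
    Returns the keys of the aborted uploads.
    """
    cutoff = datetime.now(timezone.utc) - max_age
    aborted = []
    resp = client.list_multipart_uploads(Bucket=bucket)
    for upload in resp.get("Uploads", []):
        # "Initiated" is a timezone-aware datetime in boto3 responses.
        if upload["Initiated"] < cutoff:
            client.abort_multipart_upload(
                Bucket=bucket,
                Key=upload["Key"],
                UploadId=upload["UploadId"],
            )
            aborted.append(upload["Key"])
    return aborted

# Hypothetical usage against an RGW endpoint (URL is a placeholder):
#   import boto3
#   s3 = boto3.client("s3", endpoint_url="http://rgw.example:8080")
#   abort_stale_mpus(s3, "mybucket")
```

A production script would also follow the `IsTruncated`/`NextKeyMarker`/`NextUploadIdMarker` pagination markers; a bucket lifecycle rule with `AbortIncompleteMultipartUpload` is the usual long-term fix.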
Hi,
Is it possible to get the tenant and user ID with some Python boto3 request?
Kind regards,
Rok
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
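One way to approach the boto3 question above, assuming the cluster has RGW's STS endpoint enabled (a requirement not stated in the thread): `sts.get_caller_identity()` returns an ARN that can be split into tenant and user. The ARN layout assumed below may differ between RGW releases, so treat this as a sketch.

```python
def split_tenant_user(arn):
    """Split an IAM-style user ARN ("arn:aws:iam::<tenant>:user/<uid>")
    into (tenant, user id). Tenant is "" for untenanted users.
    NOTE: this ARN layout is an assumption; check what your RGW
    release actually returns.
    """
    parts = arn.split(":")
    tenant = parts[4]
    user = parts[5].split("/", 1)[1]
    return tenant, user

# Hypothetical usage against an RGW endpoint with STS enabled
# (endpoint URL is a placeholder):
#   import boto3
#   sts = boto3.client("sts", endpoint_url="http://rgw.example:8080")
#   ident = sts.get_caller_identity()   # dict with "UserId" and "Arn"
#   tenant, user = split_tenant_user(ident["Arn"])
```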
Hi everyone,
We'd like to understand how many users are using cache tiering and in
which release.
The cache tiering code is not actively maintained, and there are known
performance issues with using it (documented in
https://docs.ceph.com/en/latest/rados/operations/cache-tiering/#a-word-of-caution).
On a related note, Intel will be presenting about their Open CAS
software that provides caching at the block layer under the OSD at the
weekly performance meeting on 2/24/2022 (similar to dm-cache, but with
differences regarding the implementation). This isn't a replacement for
cache tiering,
Hi,
we've noticed the warnings for quite some time now, but we're big fans
of the cache tier. :-)
IIRC we set it up some time around 2015 or 2016 for our production
openstack environment and it works nicely for us. We tried it without
the cache some time after we switched to Nautilus but th
Hi Eugen,
Thanks for the great feedback. Is there anything specific about the
cache tier itself that you like vs hypothetically having caching live
below the OSDs? There are some real advantages to the cache tier
concept, but eviction over the network has definitely been one of the
tougher
There’s nothing special about our setup really. I’m also open to test
any alternative if it improves our user experience. So it would
probably make sense to check out the performance meeting you
mentioned. :-)
Quoting Mark Nelson:
> Hi Eugen,
> Thanks for the great feedback. Is there an
Is there anything useful in the rgw daemon's logs? (e.g. journalctl -xeu
ceph-35194656-893e-11ec-85c8-005056870dae@rgw.obj0.c01.gpqshk)
- Adam King
On Wed, Feb 16, 2022 at 3:58 PM Ron Gage wrote:
> Hi everyone!
>
>
>
> Looks like I am having some problems with some of my ceph RGW daemons -
> t
Can you retry after resetting the systemd unit? The message "Start
request repeated too quickly." should be cleared first, then start it
again:
systemctl reset-failed ceph-35194656-893e-11ec-85c8-005056870dae@rgw.obj0.c01.gpqshk.service
systemctl start ceph-35194656-893e-11ec-85c8-0050568