Unless I'm misunderstanding your situation, you could also tag your
placement targets. You then tag users with the corresponding tag,
enabling them to create new buckets at that placement target. If a user
is not tagged with the corresponding tag, they cannot create new buckets
at that placement.
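For illustration, that flow might look like the following sketch (the zonegroup `default`, placement id `fast-placement`, tag `fast-allowed`, and uid `alice` are all made-up examples; verify the flags against your release):

```shell
# Tag the placement target (hypothetical names throughout):
radosgw-admin zonegroup placement modify \
    --rgw-zonegroup default \
    --placement-id fast-placement \
    --tags fast-allowed
radosgw-admin period update --commit   # only needed in multisite setups

# Add the matching tag to the user's placement_tags, e.g. via user metadata:
radosgw-admin metadata get user:alice > alice.json
# ... edit alice.json so it contains "placement_tags": ["fast-allowed"] ...
radosgw-admin metadata put user:alice < alice.json
```

With no matching entry in `placement_tags`, RGW refuses bucket creation at that placement target.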
Which ceph version is this? I'm trying to understand how removing a
pool leaves the PGs of that pool... Do you have any logs or something
from when you removed the pool?
We'll have to deal with a cache tier in the foreseeable future, so
this is quite relevant for us as well. Maybe I'll
I don't see sensible output for the commands:
# ls -ld <mountpoint>/volumes/subvolgrp/test
# ls -l <mountpoint>/volumes/subvolgrp/test/.snap
please remember to replace <mountpoint> with the path to the mount
point on your system
I'm presuming <mountpoint> is the path where you have mounted the
root dir of your cephfs filesystem
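As a concrete sketch (assuming, purely as an example, that the cephfs root is mounted at /mnt/cephfs):

```shell
# Show the subvolume directory itself:
ls -ld /mnt/cephfs/volumes/subvolgrp/test

# List its snapshots via the hidden .snap directory:
ls -l /mnt/cephfs/volumes/subvolgrp/test/.snap
```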
On Thu, Oct
@Eugen
We have seen the same problems 8 years ago. I can only recommend never
to use cache tiering in production.
At Cephalocon this was part of my talk, and as far as I remember cache
tiering will also disappear from Ceph soon.
Cache tiering has been deprecated in the Reef release as it has l
I know, I know... but since we are already using it (for years) I have
to check how to remove it safely, maybe as long as we're on Pacific. ;-)
Quoting Joachim Kraftmayer - ceph ambassador:
@Eugen
We have seen the same problems 8 years ago. I can only recommend
never to use cache tierin
Hello Eugen, Hello Joachim,
@Joachim: Interesting! And you got empty PGs, too? How did you solve the
problem?
@Eugen: This is one of our biggest clusters, and we're in the process of
migrating from Nautilus to Octopus and from CentOS to Ubuntu.
The cache tier pool's OSDs were still
Do your current CRUSH rules for your pools still apply to the new OSD map
with those 4 nodes? If you have e.g. EC 4+2 in an 8-node cluster and now
you have 4 nodes, you've gone below your min size; please check.
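The arithmetic behind that check can be sketched in a few lines (this is not a Ceph API, just the rule of thumb that an EC pool with k data and m coding chunks needs k+m failure domains per PG and defaults to min_size = k+1):

```python
def ec_pool_fits(k: int, m: int, hosts: int) -> dict:
    """Check whether an EC k+m pool can be healthy with `hosts` failure domains."""
    return {
        "shards_needed": k + m,        # distinct hosts needed to place one PG
        "min_size": k + 1,             # Ceph's default min_size for EC pools
        "can_place_all_shards": hosts >= k + m,
        "meets_min_size": hosts >= k + 1,
    }

# An EC 4+2 pool on a cluster shrunk from 8 hosts to 4:
print(ec_pool_fits(k=4, m=2, hosts=4))
```

With k=4 and m=2 there is nowhere to put all 6 shards on 4 hosts, and 4 is below the default min_size of 5, so PGs go inactive.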
On Thu, Sep 28, 2023 at 9:24 PM, wrote:
>
> I have an 8-node cluster with old hardware
Maybe the mismatching OSD versions had an impact on the unclean tier
removal, but this is just a guess. I couldn't reproduce it in a
Pacific test cluster; the removal worked fine without leaving behind
empty PGs. But I had only a few rbd images in that pool, so it's not
really representative.
Hi,
I strongly agree with Joachim, I usually disable the autoscaler in
production environments. But the devs would probably appreciate bug
reports to improve it.
Quoting Boris Behrens:
Hi,
I've just upgraded our object storages to the latest Pacific version
(16.2.14) and the autscal
I usually set it to warn, so I don't forget to check from time to time :)
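For reference, the mode can be set per pool or as a cluster-wide default for new pools; a sketch with a placeholder pool name:

```shell
# Warn instead of auto-adjusting for an existing pool:
ceph osd pool set <pool> pg_autoscale_mode warn

# Default for newly created pools:
ceph config set global osd_pool_default_pg_autoscale_mode warn
```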
On Thu, Oct 5, 2023 at 12:24 PM, Eugen Block wrote:
> Hi,
>
> I strongly agree with Joachim, I usually disable the autoscaler in
> production environments. But the devs would probably appreciate bug
> reports to improve
Thanks Tobias, I see that https://github.com/ceph/ceph/pull/53414 had
a ton of test failures that don't look related. I'm working with Yuri
to reschedule them.
On Thu, Oct 5, 2023 at 2:05 AM Tobias Urdin wrote:
>
> Hello Yuri,
>
> On the RGW side I would very much like to get this [1] patch in tha
Hi
I have been using Ceph for many years now, and recently upgraded to Reef.
Seems I made the jump too quickly, as I have been hitting a few issues. I can't
find any mention of them in the bug reports. I thought I would share them here
in case it is something to do with my setup.
On V18.2.0
c
Hi,
tl;dr why are my osds still spilling?
I've recently upgraded to 16.2.14 from 16.2.9 and started receiving bluefs
spillover warnings (due to the "fix spillover alert" per the 16.2.14
release notes). E.g. from 'ceph health detail', the warning on one of
these (there are a few):
osd.76 spi
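For context, here is a back-of-envelope sketch (not Ceph code) of why spillover happens at all: with bluestore's default RocksDB settings, L1 is about 256 MiB and each further level is roughly 10x larger, and a level only stays on the dedicated DB device if it fits there entirely, which is where the familiar ~3/~30/~300 GB DB sizing rule of thumb comes from. Newer releases can relax this with the `use_some_extra` volume selection policy, and 16.2.14 mainly fixed the alerting, so treat this as an approximation:

```python
def usable_db_levels(db_bytes: int, l1: int = 256 << 20, mult: int = 10) -> list:
    """Cumulative RocksDB level sizes that fully fit on the DB device."""
    levels, total, size = [], 0, l1
    while total + size <= db_bytes:
        total += size
        levels.append(total)
        size *= mult
    return levels

fits = usable_db_levels(20 << 30)              # a 20 GiB DB partition
print([round(x / 2**30, 2) for x in fits])     # -> [0.25, 2.75]
```

So a 20 GiB DB partition holds L1 and L2 but not L3 (~28 GiB cumulative); data beyond that spills to the slow device.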
On Fri, Oct 06, 2023 at 02:55:22PM +1100, Chris Dunlop wrote:
Hi,
tl;dr why are my osds still spilling?
I've recently upgraded to 16.2.14 from 16.2.9 and started receiving
bluefs spillover warnings (due to the "fix spillover alert" per the
16.2.14 release notes). E.g. from 'ceph health detail
On Thu, Oct 05, 2023 at 09:22:29AM +0200, Robert Hish wrote:
> Unless I'm misunderstanding your situation, you could also tag your
> placement targets. You then tag users with the corresponding tag enabling
> them to create new buckets at that placement target. If a user is not tagged
> with the co
Hello Matthias,
In our setup we have a set of users that are only used to read from certain
buckets (they have s3:GetObject set in the bucket policy).
When we create those read-only users using the Admin Ops API we add the
max-buckets=-1 parameter, which disables bucket creation.
https://docs.ceph.co
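The same knob is also exposed through the radosgw-admin CLI; a sketch with a made-up uid:

```shell
# Disable bucket creation for an existing user (uid is a made-up example):
radosgw-admin user modify --uid readonly-user --max-buckets=-1

# Verify:
radosgw-admin user info --uid readonly-user | grep max_buckets
```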