. I can see that an OSD is listed in the output on the last line.
> 3. I restart that OSD
> 4. Delete the rbd
>
> I'm not sure if it is the same thing, but it doesn't hurt to try.
>
>
> On Fri, Jun 13, 2025 at 4:34 AM Gaël THEROND
> wrote:
>
>> Hi Enri
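For reference, a quick way to see what is still holding an image before deleting it is something like the following (pool and image names are placeholders):

rbd status <pool>/<image>   # lists active watchers on the image
rbd rm <pool>/<image>       # retry the delete once the watcher is gone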
Hi there! Nope it is not in the trash pool.
On Thu, Jun 5, 2025 at 12:37, Eugen Block wrote:
> Is that image in the trash?
>
> `rbd -p pool trash ls`
>
> Quoting Gaël THEROND:
>
> > Hi folks,
> >
> > I've a quick question. On one of our pool we
Hi folks,
I've a quick question. On one of our pools we found an image that
no longer exists physically (the image doesn't exist, has no snapshots
attached, and is not the parent of another image) but is still listed when
performing `rbd -p pool ls`. However, it errors with a nice "Error opening
ima
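For reference, if the name only survives as a stale entry in the pool's rbd_directory object, something like this should show it (pool and image names are placeholders, and this assumes the image really has no data left):

rbd info <pool>/<image>                      # expected to fail with "error opening image"
rados -p <pool> listomapvals rbd_directory   # the stale image name should still show up here

Removing the stale key by hand with rados rmomapkey is possible, but treat it as a last resort.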
> There is also a parent issue, 56029, that you might want to look into.
>
> If you hit one of those bugs, an upgrade to the version containing a fix
> might help.
>
> Good luck,
> Vladimir.
>
> Get Outlook for Android <https://aka.ms/AAb9ysg>
> ---
To back Bartosz up on this issue, I have pretty much the same issue with
Pacific 16.2.13, and I think the root cause of both Bartosz's issue
and mine is the same.
Each time we request an object, the RadosGW nodes answer with a 404
NoSuchKey error 50% of the time (on a 3-node gateway) and it's 100%
Is there anyone already running containerized CEPH on CentOS Stream 9 hosts?
I think there is a pretty big issue here if CEPH images are built on
CentOS but never tested against it.
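To see whether only some of the gateways misbehave, a crude per-gateway check could look like this (hostnames, port, bucket and object are placeholders, and it assumes the object is readable anonymously or via a presigned URL):

for gw in rgw1 rgw2 rgw3; do
  curl -s -o /dev/null -w "$gw -> %{http_code}\n" "http://$gw:8080/<bucket>/<object>"
done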
Hi team,
I’m experimenting a bit with CentOS Stream 9 on our infrastructure as we’re
migrating away from CentOS Stream 8.
As our deployment model is a hyperconverged one, I have CEPH and OPENSTACK
running on the same hosts (OSDs + NOVA/CINDER).
That prevents me from keeping the CEPH nodes running on CentOS Str
I found it a bit confusing in the docs because you can't change the EC
> profile of a pool (due to the k and m numbers), and the crush rule is defined
> in the profile as well, but you can change the crush rule outside of the
> profile.
>
> Regards,
> Rich
>
> On Mon, 24 Apr 2023 at 20
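In practice that means creating a new rule (for example from a new profile) and pointing the pool at it, roughly like this (profile, rule and pool names are placeholders):

ceph osd erasure-code-profile set ec-rack-profile k=3 m=2 crush-failure-domain=rack
ceph osd crush rule create-erasure ec-rack-rule ec-rack-profile
ceph osd pool set <pool> crush_rule ec-rack-rule   # changes the rule; k and m stay as they were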
What kind of policy should I write to do that?
Is this procedure something that looks ok to you?
Kind regards!
On Wed, Apr 19, 2023 at 14:49, Casey Bodley wrote:
> On Wed, Apr 19, 2023 at 5:13 AM Gaël THEROND
> wrote:
> >
> > Hi everyone, quick question regarding rado
Hi everyone, quick question regarding the radosgw zone data-pool.
I’m currently planning to migrate an old data-pool that was created with
an inappropriate failure-domain to a newly created pool with the appropriate
failure-domain.
If I’m doing something like:
radosgw-admin zone modify --rgw-zone default --da
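For the record, a hedged sketch of pointing the default zone at a new data pool (this assumes a single default zone and that existing objects still have to be copied separately):

radosgw-admin zone get --rgw-zone default > zone.json
# edit data_pool under placement_pools in zone.json to the new pool name
radosgw-admin zone set --rgw-zone default --infile zone.json
radosgw-admin period update --commit

Only newly written objects land in the new pool; data already stored has to be migrated on top of that.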
;s this:
> radosgw-admin metadata get bucket:{bucket_name}
> or
> radosgw-admin metadata get bucket.instance:{bucket_name}:{instance_id}
>
> Hopefully that helps you or someone else struggling with this.
>
> Rich
>
> On Wed, 15 Mar 2023 at 07:18, Gaël THEROND
> wrote:
ect (chunks if EC) that are
> smaller than the min_alloc size? This cheat sheet might help:
>
>
> https://docs.google.com/spreadsheets/d/1rpGfScgG-GLoIGMJWDixEkqs-On9w8nAUToPQjN8bDI/edit?usp=sharing
>
> Mark
>
> On 3/14/23 12:34, Gaël THEROND wrote:
> > Hi everyone, I’ve got
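A hedged way to check what the OSDs are configured with (this assumes BlueStore; note the value an OSD was actually created with is fixed at mkfs time):

ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
ceph daemon osd.0 config get bluestore_min_alloc_size_ssd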
get key: (22) Invalid argument
Ok, fine for the API, I’ll deal with the S3 API.
Even if a radosgw-admin bucket flush version --keep-current or something
similar would be much appreciated xD
On Tue, Mar 14, 2023 at 19:07, Robin H. Johnson wrote:
> On Tue, Mar 14, 2023 at 06:59:51PM +0100, G
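Since the S3 API route came up, a hedged sketch of a lifecycle rule that expires noncurrent versions, using the aws CLI against the RGW endpoint (bucket name and endpoint are placeholders):

cat > lc.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-noncurrent",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 1 }
    }
  ]
}
EOF
aws --endpoint-url http://<rgw-endpoint> s3api put-bucket-lifecycle-configuration \
    --bucket <bucket> --lifecycle-configuration file://lc.json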
radosgw-admin?
If not, I’ll use the REST API, no worries.
On Tue, Mar 14, 2023 at 18:49, Robin H. Johnson wrote:
> On Tue, Mar 14, 2023 at 06:34:54PM +0100, Gaël THEROND wrote:
> > Hi everyone, I’ve got a quick question regarding one of our RadosGW
> bucket.
> >
> > This
Hi everyone, I’ve got a quick question regarding one of our RadosGW buckets.
This bucket is used to store Docker registries, and the total amount of
data we use is supposed to be 4.5 TB, BUT it looks like Ceph tells us we
actually use ~53 TB of data.
One interesting thing is, this bucket seems to shard
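A hedged first check for that kind of discrepancy (bucket name is a placeholder):

radosgw-admin bucket stats --bucket=<bucket>   # compare num_objects and size_actual with what the registry expects
radosgw-admin bucket check --bucket=<bucket>   # reports index inconsistencies; only add --fix once the output looks sane

Old object versions and leftover multipart uploads are common reasons for this kind of gap.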
chase projects that are putting pressure on the cluster even though the
Openstack platform has QoS in place all over, ah ah :-)
On Wed, Feb 23, 2022 at 16:57, Eugen Block wrote:
> That is indeed unexpected, but good for you. ;-) Is the rest of the
> cluster healthy now?
>
> Quoting
On Wed, Feb 23, 2022 at 12:51, Gaël THEROND wrote:
> Thanks a lot Eugen, I dumbly forgot about the rbd block prefix!
>
> I'll try that this afternoon and tell you how it went.
>
> On Wed, Feb 23, 2022 at 11:41, Eugen Block wrote:
>
>> Hi,
>>
>> > How
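For anyone finding this later, the prefix in question is the one rbd info reports, and leftover data objects can be listed against it (pool and image names are placeholders, the id below is just an example):

rbd info <pool>/<image> | grep block_name_prefix   # e.g. block_name_prefix: rbd_data.1f2a3b4c5d
rados -p <pool> ls | grep rbd_data.1f2a3b4c5d      # any hits are leftover data objects for that image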
ted you can check the mon daemon:
>
> ceph daemon mon.<id> sessions
>
> The mon daemon also has a history of slow ops:
>
> ceph daemon mon.<id> dump_historic_slow_ops
>
> Regards,
> Eugen
>
>
> Quoting Gaël THEROND:
>
> > Hi everyone, I'm having a really nas
Hi everyone, I've been having a really nasty issue for around two days where
our cluster reports a bunch of SLOW_OPS on one of our OSDs, as shown here:
https://paste.openstack.org/show/b3DkgnJDVx05vL5o4OmY/
Here is the cluster specification:
* Used to store Openstack related data (VMs/Snapshots/Volumes/Swift).
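On the OSD side, the usual hedged first steps are (OSD id is a placeholder):

ceph health detail                        # shows which OSDs are carrying the slow ops
ceph daemon osd.<id> dump_ops_in_flight   # what is currently stuck on that OSD
ceph daemon osd.<id> dump_historic_ops    # recently completed slow operations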
ng the same pool name.
Thanks a lot for your follow-up by the way, and sorry for the really late
answer!
On Mon, Jan 11, 2021 at 13:38, Ilya Dryomov wrote:
> On Mon, Jan 11, 2021 at 10:09 AM Gaël THEROND
> wrote:
> >
> > Hi Ilya,
> >
> > Here is additional inform
jnmZ4g
Here is the complete kernel logs: https://pastebin.com/SNucPXZW
Thanks a lot for your answer, I hope these logs can help ^^
On Fri, Jan 8, 2021 at 21:23, Ilya Dryomov wrote:
> On Fri, Jan 8, 2021 at 2:19 PM Gaël THEROND
> wrote:
> >
> > Hi everyone!
> >
>
Hi everyone!
I'm facing a weird issue with one of my CEPH clusters:
OS: CentOS - 8.2.2004 (Core)
CEPH: Nautilus 14.2.11 - stable
RBD using erasure code profile (k=3; m=2)
When I want to format one of my RBD images (client side) I get the
following kernel messages multiple times with different s
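As a hedged reminder (it may or may not be the cause of the kernel errors above), RBD on an erasure-coded pool needs overwrites enabled on the EC pool and a replicated pool holding the image metadata, roughly (pool and image names are placeholders):

ceph osd pool set <ec-pool> allow_ec_overwrites true
rbd create --size 100G --data-pool <ec-pool> <replicated-pool>/<image>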
h failure), but not responding to any
> commands
>
> Regards
> Mateusz Skała
>
>
> On Tue, 13 Oct 2020 at 11:25, Gaël THEROND
> wrote:
>
>> This error means your quorum didn’t form.
>>
>> How many mon nodes do you have usually and how mu
This error means your quorum didn’t form.
How many mon nodes do you have usually and how many went down?
On Tue, Oct 13, 2020 at 10:56, Mateusz Skała wrote:
> Hello Community,
> I have problems with ceph-mons in docker. Docker pods are starting but I
> got a lot of messages "e6 handle_auth_
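A hedged way to see what each surviving mon thinks about quorum, from inside its container (container and mon names are placeholders):

docker exec -it <mon-container> ceph daemon mon.<mon-name> mon_status
# "quorum" should list enough ranks; a state of "probing" or "electing" means it is still waiting for peers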
point about keeping the container alive by using
> `sleep` is important. Then you can get into the container with `exec` and
> do what you need to.
>
>
> https://rook.io/docs/rook/v1.4/ceph-disaster-recovery.html#restoring-mon-quorum
>
>
> On Oct 12, 2020, at 4:16 PM, Gaël
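The sleep trick mentioned above boils down to something like this; the image name and mounts are deployment specific, so treat it as a sketch:

docker run -d --name mon-debug --entrypoint /bin/sleep \
    -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph \
    <ceph-image> infinity
docker exec -it mon-debug /bin/bash   # now you can inspect or rebuild the mon store by hand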
Hi everyone,
Because of unfortunate events, I’ve got a container-based ceph cluster
(nautilus) in bad shape.
It's one of the lab clusters, which is only made of 2 nodes as a control plane
(I know it’s bad :-)); each of these nodes runs a mon, a mgr and a rados-gw
containerized ceph_daemon.
They were install