It's true that I was looking at a way to bind users to a realm, zonegroup or zone, but I don't see one. In fact I don't think users are bound to a zone at all, as there is no related attribute in the user info, if I'm right.
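For what it's worth, a quick way to check is to dump the user metadata and compare it with the zone/zonegroup objects; a minimal sketch, assuming the radosgw-admin CLI is available and with a placeholder uid:

    # User metadata: keys, caps, quotas... but no realm/zonegroup/zone attribute
    radosgw-admin user info --uid=<some-user>
    # Realm/zonegroup/zone are properties of the RGW deployment, not of the user
    radosgw-admin zone get
    radosgw-admin zonegroup get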
Michel
Sent from my mobile
On 7 July 2025 18:25:31, Wi
Hello,
I'm wondering if any of you have good knowledge of the CLAY plugin for EC pools, and how it compares to Jerasure in terms of stability and performance?
https://docs.ceph.com/en/squid/rados/operations/erasure-code-clay/
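For reference, a CLAY profile is created much like a Jerasure one; a sketch with placeholder k/m/d values and pool name (d must satisfy k+1 <= d <= k+m-1):

    # Erasure-code profile using the CLAY plugin
    ceph osd erasure-code-profile set clay_profile plugin=clay k=4 m=2 d=5 crush-failure-domain=host
    # EC pool backed by that profile
    ceph osd pool create ecpool_clay erasure clay_profile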
Regards
Hi ceph users,
We have two Ceph clusters with the same hardware/CPU (ARM)/OS (Debian 12)/network, both deployed with cephadm (default configuration, 2 OSDs per NVMe device), except that one is on the Reef release and the second is on the Squid release.
We noticed a big difference in our bench
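In case others want to reproduce this kind of comparison, one simple way to generate comparable numbers on both clusters is a raw RADOS benchmark (a sketch; the pool name and duration are arbitrary):

    # 60s of object writes, keeping the objects for the read pass
    rados bench -p testpool 60 write --no-cleanup
    # Sequential read pass over what was just written
    rados bench -p testpool 60 seq
    # Remove the benchmark objects afterwards
    rados -p testpool cleanup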
Hi Ceph Users,
I'm trying to test the Ceph radosgw bucket logging feature, but I keep getting a "MethodNotAllowed" error, even though I reinstalled the radosgw from scratch (I recreated a new realm, zonegroup and zone).
Our Ceph cluster is Ceph Squid v19.2.2 on Debian 12.
I followed the procedure in
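For anyone else testing this: bucket logging is driven through the S3 PutBucketLogging call, so one way to exercise it is with the AWS CLI against the RGW endpoint (a sketch; endpoint, bucket names and credentials are placeholders):

    # Send access logs for "srcbucket" to "logbucket" under a "log/" prefix
    aws --endpoint-url http://<rgw-endpoint> s3api put-bucket-logging \
        --bucket srcbucket \
        --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"logbucket","TargetPrefix":"log/"}}'
    # Read the logging configuration back
    aws --endpoint-url http://<rgw-endpoint> s3api get-bucket-logging --bucket srcbucket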
int), unsigned int, std::basic_string_view >)+0x78) [0x55a9a87cdf48]",
"(ceph::osd::scheduler::PGScrubResched::run(OSD*, OSDShard*, boost::intrusive_ptr&, ThreadPool::TPHandle&)+0x32) [0x55a9a897b2f2]",
"(OSD::ShardedOpWQ::_process(unsigned int,
ceph
Dear All,
After updating our Ceph cluster from Octopus to Pacific, we got a lot of slow_ops on many OSDs (which caused the cluster to become very slow).
We did our investigation and searched the ceph-users list, and we found that rebuilding all OSDs can improve (or fix) the issue (we h
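Before rebuilding OSDs wholesale it may be worth looking at where the slow ops actually sit; a sketch, with a placeholder OSD id, assuming access to the admin socket on that OSD's host:

    # Cluster-wide summary of which OSDs report slow ops
    ceph health detail
    # In-flight and recent slow operations on one OSD
    ceph daemon osd.<id> dump_ops_in_flight
    ceph daemon osd.<id> dump_historic_slow_ops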
Dear all,
We noticed that the issue we encounter happens exclusively on one host out of a total of 10 hosts (nearly all of the 8 OSDs on this host crash periodically => ~3 times a week).
Is there any idea or suggestion?
Thanks
Hi,
I found more information in the OSD logs about this assertion, maybe it could help =>
ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)
*** Caught signal (Aborted) ** in thread 7f8002357700 thread_name:msgr-worker-2
terminate called after throwing an instance of 'ceph::buffer::v15_2_0::end_of_buffer'
  what(): buffer::end_of_buffer
Thanks for your help
From: Wissem MIMOUNA
Sent: Wednesday, 23 March 2022 13:37
To: ceph-users@ceph.io
Subject: [ceph-users] OSD crash on a new ceph cluster
Dear All,
We recently installed a new Ceph cluster with ceph-ansible. Everything works fine, except that over the last few days we noticed that some OSDs crashed.
Below is the log for more information.
Thanks for your help.
"crash_id": "2022-03-23T08:27:05.085966Z_xx",
"timestamp": "2022-03-23T08:2
stics from the cluster yourself...
Thanks,
Igor
On 2/17/2022 5:43 PM, Wissem MIMOUNA wrote:
Hi Igor, thank you very much, this helped us to understand the root cause, and we hope to get a fix soon (with a new Ceph release).
In the meantime, do you have any idea how to periodically check the zo
From: Igor Fedotov
Sent: Thursday, 17 February 2022 16:01
To: Wissem MIMOUNA
Subject: Re: [ceph-users] OSDs crash randomly
Wissem, unfortunately there is no way to learn whether zombies have appeared other than running fsck. But I think this can be performed on a weekly or even monthly basis - from
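For reference, an offline BlueStore fsck on one OSD looks roughly like this (a sketch; the OSD id and data path are placeholders, the daemon must be stopped first, and the unit name assumes a non-containerized deployment):

    systemctl stop ceph-osd@<id>
    # Read-only consistency check of the BlueStore instance
    ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-<id>
    systemctl start ceph-osd@<id>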
Dear,
Some OSDs on our Ceph cluster crash with no explanation.
Stopping and starting the crashed OSD daemon fixed the issue, but this has happened a few times and I just need to understand the reason.
For your information, the error has been fixed by a change in the Octopus release (https://github.com
January 2021 16:02
To: Wissem MIMOUNA
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: Ceph RadosGW & OpenStack swift problem
Hi,
I changed switch-openrc and verified the project to be "admin". Unfortunately the problems persist.
I think I have configured the Ceph now some
The user rgwswift is only for the radosgw config (do not use it in your openrc file); use the swift user instead. Also, keep the default project set to admin (os_project_name).
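In other words, the openrc used for the Swift client should look roughly like this (a sketch; the password, Keystone URL and domain names are placeholders to adapt):

    export OS_USERNAME=swift              # not rgwswift, which is only for the radosgw config
    export OS_PROJECT_NAME=admin          # keep the default project
    export OS_PASSWORD=<swift-user-password>
    export OS_AUTH_URL=http://<keystone-host>:5000/v3
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_DOMAIN_NAME=Default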
Rgds
From: Mika Saari
Sent: Thursday, 7 January 2021 12:45
To: Wissem MIMOUNA
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users
here>/swift/v1/AUTH_/test3 -X POST -H "X-Auth-Token:
> here> INFO:swiftclient:RESP STATUS: 401 Unauthorized
>
> Thanks a lot again,
> -Mika
>
> On Tue, Jan 5, 2021 at 11:19 AM Wissem MIMOUNA <wissem.mimo...@fiducialcloud.fr> wrote:
>
>
Yes, we have had object-map enabled.
Rgds
-----Original Message-----
From: Jason Dillaman
Sent: Tuesday, 5 January 2021 15:08
To: Wissem MIMOUNA
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Timeout ceph rbd-nbd mounted image
On Tue, Jan 5, 2021 at 9:01 AM Wissem MIMOUNA wrote:
Hi,
Thank you for your feedback. It seems the error is related to the fstrim run once a week (the default).
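One way to confirm the correlation is to look at the distro's weekly fstrim timer and line its runs up with the timeouts (a sketch for a systemd-based Ubuntu machine):

    # When does the weekly discard run fire?
    systemctl list-timers fstrim.timer
    # Temporarily disable it to see whether the timeouts stop
    systemctl stop fstrim.timer
    systemctl disable fstrim.timer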
Do you have more information about the NBD/XFS memory pressure issues?
Thanks
-----Original Message-----
From: Jason Dillaman
Sent: Tuesday, 5 January 2021 14:42
To: Wissem MI
Hello,
Looking for information about a timeout which occurs once a week for a Ceph RBD image mounted on a machine using rbd-nbd (a Linux Ubuntu machine).
The error found in 'dmesg' is below:
[798016.401469] block nbd0: Connection timed out
[798016.401506] block nbd0: shutting down sockets
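If the timeouts line up with long-running discards, one mitigation is to raise the NBD I/O timeout when mapping the image (a sketch; pool/image names are placeholders, and older rbd-nbd releases call the option --timeout instead of --io-timeout):

    # Map with a 5-minute I/O timeout instead of the kernel default
    rbd-nbd map --io-timeout 300 <pool>/<image>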
Many T
Hi,
Which version of OpenStack do you have? I guess that since Ussuri (or maybe even before), Swift authentication through Keystone requires the account in the URL. You have to add this option in "/etc/ceph/ceph.conf", in the rgw section: "rgw swift account in url = true", or set it directly via config.
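Concretely, that's a ceph.conf stanza along these lines (the client section name depends on how your radosgw instance is named), followed by a radosgw restart so it picks up the change:

    [client.rgw.<instance-name>]
    rgw swift account in url = true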
Hello,
It could be related to the « erasure-code-profile », which is defined with different k+m on the master and the secondary.
As for the size of the buckets, I guess it's probably due to compression being enabled on the secondary radosgw (« rgw compression: yes »).
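A quick way to confirm both hypotheses is to dump the EC profiles and the secondary zone's placement settings on each site (a sketch; profile and zone names are placeholders):

    # Compare k/m between the two sites
    ceph osd erasure-code-profile get <profile-name>
    # compression_type shows up under placement_pools in the zone dump
    radosgw-admin zone get --rgw-zone=<secondary-zone>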
Regards
From: Scheurer François
En