Dear Ceph contributors
While our (new) rgw secondary zone is doing the initial data sync from our
master zone,
we noticed that the reported capacity usage was getting higher than on the
primary zone:
Master Zone:
ceph version 14.2.5
zone parameters:
"log_meta": "fal
Hi everybody
Does anybody have experience with significant performance degradation during
a pool deletion?
We are asking because we are going to delete a 370 TiB pool with 120 M objects
and have never done this in the past.
The pool is using erasure coding 8+2 on NVMe SSDs with rocksdb/wal on nv
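A rough sketch of the knobs that should throttle the background removal
(osd_delete_sleep_* should be available on recent releases; the values are only
examples):
ceph config set osd osd_delete_sleep_ssd 1      # seconds to sleep between removal work
ceph config set mon mon_allow_pool_delete true  # pool deletion must be explicitly allowed
ceph osd pool delete <pool> <pool> --yes-i-really-really-mean-it
ceph osd pool stats                             # watch removal progress / client impact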
Hi Wissem
Thank you for your reply.
As the erasure-code-profile is a pool setting, it is on a lower layer and rgw
should be unaware and independent of it, but it could play a role regarding this
known issue with space allocation and EC pools.
Anyway, this does seem to be the cause here; after r
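For illustration, the allocation overhead on an EC pool can be estimated like
this (all numbers below are assumptions for the example, not measurements from
this cluster):
S=$((128*1024))      # example object size: 128 KiB
K=8; M=2             # EC profile 8+2
ALLOC=$((64*1024))   # bluestore min_alloc_size, 64 KiB (the pre-Pacific HDD default)
CHUNK=$(( (S + K - 1) / K ))                          # 16 KiB of data per chunk
ONDISK=$(( ((CHUNK + ALLOC - 1) / ALLOC) * ALLOC ))   # each chunk rounds up to 64 KiB
TOTAL=$(( ONDISK * (K + M) ))                         # 640 KiB written for a 128 KiB object
echo "logical ${S} B -> ${TOTAL} B on disk ($(( TOTAL / S ))x)"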
From: Frank Schilder
Sent: Saturday, January 9, 2021 12:10 PM
To: Glen Baars; Scheurer François; ceph-users@ceph.io
Subject: Re: performance impact by pool deletion?
Hi all,
I deleted a ceph fs data pool (EC 8+2) of size 240TB with about 150M objects
and it had no observable
[1] https://tracker.ceph.com/issues/45765
[2] https://tracker.ceph.com/issues/47044
Quoting Scheurer François:
> Hi everybody
>
> Does anybody have experience with significant performance degradation during
> a pool deletion?
>
> We are asking because we are going to delete a 3
Dear All
We have the same question here, if anyone can help ... Thank you!
Cheers
Francois
From: ceph-users on behalf of P. O.
Sent: Friday, August 9, 2019 11:05 AM
To: ceph-us...@lists.ceph.com
Subject: [ceph-users] Multisite RGW - Large omap objects relate
Dear All
We have the same question here, if anyone can help ... Thank you!
We did not find any documentation about the steps to reset & restart the sync,
especially about the implications of 'bilog trim', 'mdlog trim' and 'datalog trim'.
Our secondary zone is read-only. Both master and secondary zone
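A rough, unverified sketch of what we understand a sync reset on the secondary
zone would involve (placeholders in <>):
radosgw-admin metadata sync init
radosgw-admin data sync init --source-zone=<master-zone>
systemctl restart ceph-radosgw.target        # all rgw instances on the secondary
# the logs themselves would then be trimmed separately, e.g.
#   radosgw-admin mdlog trim ...
#   radosgw-admin datalog trim ...
#   radosgw-admin bilog trim --bucket=<bucket> ...
# and the implications of these trims are exactly what is unclear to us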
Dear All
We are trying to remove old multipart uploads but run into trouble with some of
them containing null characters:
rados -p zh-1.rgw.buckets.index rmomapkey
.dir.cb1594b3-a782-49d0-a19f-68cd48870a63.81880353.1.0
'_multipart_MBS-35a9b79c-f27d-44f2-804f-472ef0520816/CBB_BSSRV01/CBB_DiskImage
____
From: Scheurer François
Sent: Saturday, May 8, 2021 12:09:14 PM
To: ceph-users@ceph.io
Subject: [ceph-users] rgw bug adding null characters in multipart object names
and in Etags
Dear All
We are
From: Scheurer François
Sent: Thursday, May 13, 2021 12:09:12 PM
To: ceph-users@ceph.io
Subject: [ceph-users
____
From: Scheurer François
Sent: Thursday, May 13, 2021 2:36 PM
To: ceph-users@ceph.io
Subject: Re: rgw bug adding null characters in multipart object names and in
Etags
Hi All
listomapkeys actually deals correctly with the null chars and outputs them.
rmomapkey does not, but rados has a
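A sketch of a binary-safe way to remove such a key (it assumes the rados CLI in
use has the --omap-key-file option; the grep pattern below is only an example to
select the offending key):
# dump the raw keys; listomapkeys prints the null bytes as-is
rados -p zh-1.rgw.buckets.index listomapkeys \
  .dir.cb1594b3-a782-49d0-a19f-68cd48870a63.81880353.1.0 > keys.bin
# isolate the offending key into its own file, dropping the trailing newline
grep -a '_multipart_MBS-35a9b79c' keys.bin | head -n 1 | tr -d '\n' > key.bin
# pass the key via a file instead of the command line
rados -p zh-1.rgw.buckets.index rmomapkey \
  .dir.cb1594b3-a782-49d0-a19f-68cd48870a63.81880353.1.0 --omap-key-file key.bin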
Dear All
The rgw user metadata "default_storage_class" is not working as expected on
Nautilus 14.2.15.
See the doc: https://docs.ceph.com/en/nautilus/radosgw/placement/#user-placement
S3 API PUT with the header x-amz-storage-class:NVME is working as expected.
But without this header RGW sh
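For reference, the field itself can be set by editing the user metadata directly
(uid and storage class name below are placeholders):
radosgw-admin metadata get user:<uid> > user.json
# set the "default_storage_class" field (inside "data") to "NVME" in user.json, then:
radosgw-admin metadata put user:<uid> < user.json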
Dear All,
RGW provides atomic PUT in order to guarantee write consistency.
cf: https://ceph.io/en/news/blog/2011/atomicity-of-restful-radosgw-operations/
But my understanding is that there is no guarantee regarding the PUT ordering
sequence.
So basically, if doing a storage class migration:
aws s
Hello
Short question regarding journal-based rbd mirroring.
▪IO path with journaling w/o cache:
a. Create an event to describe the update
b. Asynchronously append event to journal object
c. Asynchronously update image once event is safe
d. Complete IO to client once update is safe
[cf.
htt
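For context, the setup in question is plain journal-based mirroring, enabled
roughly like this (pool and image names are placeholders; omit the 'journal'
argument on pre-Octopus releases):
rbd feature enable <pool>/<image> exclusive-lock journaling   # journaling needs exclusive-lock
rbd mirror pool enable <pool> image                           # per-image mirroring mode
rbd mirror image enable <pool>/<image> journal                # journal-based, not snapshot-based
rbd mirror image status <pool>/<image>                        # replay state on the peer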
til corrected.
cheers
Francois Scheurer
--
EveryWare AG
François Scheurer
Senior Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich
tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: francois.scheu...@everyware.ch
web: http://www.everyware.ch
____
From: Scheurer F
Hi Frederic
For your point 3, the default_storage_class from the user info is apparently
ignored.
Setting it on Nautilus 14.2.15 had no impact and objects were still stored with
STANDARD.
Another issue is that some clients like s3cmd by default explicitly send
STANDARD.
And even afte
Hi everyone
How can we display the true osd block size?
I get 64K for an HDD osd:
ceph daemon osd.0 config show | egrep --color=always
"alloc_size|bdev_block_size"
"bdev_block_size": "4096",
"bluefs_alloc_size": "1048576",
"bluefs_shared_alloc_size":
10
From: Dan van der Ster
Sent: Thursday, February 10, 2022 4:33 PM
To: Scheurer François
Cc: Ceph Users
Subject: Re: [ceph-users] osd true blocksize vs bluestore_min_alloc_size
Hi,
When an osd
From: Igor Fedotov
Sent: Thursday, February 10, 2022 6:06 PM
To: Scheurer François; Dan van der Ster
Cc: Ceph Users
Subject: Re: [ceph-users] Re: osd true blocksize vs bluestore_min_alloc_size
Hi Francois,
You should set debug_bluestore = 10 instead.
And then grep f
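Presumably something along these lines (osd.0 as an example; the exact string to
grep for is an assumption):
ceph config set osd.0 debug_bluestore 10     # persist the higher debug level
systemctl restart ceph-osd@0                 # bluestore logs its alloc sizes when it opens
grep -i 'min_alloc_size' /var/log/ceph/ceph-osd.0.log | tail
ceph config rm osd.0 debug_bluestore         # lower the debug level again afterwards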
Dear Ceph Experts,
The documentation about this rgw command is a bit unclear:
radosgw-admin bucket check --bucket=<bucket> --fix --check-objects
Is this command still maintained and safe to use? (we are still on nautilus)
Does it work with sharded buckets, and also in multi-site?
I heard it will clear inval
(resending to the new mailing list)
Dear Casey, Dear All,
We tested the migration from Luminous to Nautilus and noticed two regressions
breaking the RGW integration in Openstack:
1) the following config parameter is not working on Nautilus but is valid on
Luminous and on Master:
Hi Thomas
To get the usage:
ceph osd df | sort -nk8
#VAR is the ratio to the average utilization
#WEIGHT is the CRUSH weight; typically the disk capacity in TiB
#REWEIGHT is a temporary WEIGHT correction for manual rebalancing (until osd
restart, or with ceph osd set noout)
You can use for temporary rew
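For completeness, the corresponding commands look roughly like this (the osd id
and values are only examples):
ceph osd reweight 12 0.95            # temporary correction, the REWEIGHT column (0..1)
ceph osd crush reweight osd.12 3.64  # permanent CRUSH weight, the WEIGHT column (in TiB)
ceph balancer status                 # or let the balancer handle it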
Dear Casey
Many thanks, it's great to get your help!
Cheers
Francois
From: Casey Bodley
Sent: Thursday, March 5, 2020 3:57 PM
To: Scheurer François; ceph-users@ceph.io
Cc: Engelmann Florian; Rafael Weingärtner
Subject: Re: Fw: Incompatibil
Dear All
One ceph cluster is running with all daemons (mon, mgr, osd, rgw) on
version 12.2.12.
Let's say we configure an additional radosgw instance with version 14.2.8,
configured with the same ceph cluster name, realm, zonegroup and zone as the
existing instances.
Is it dangerous
Hi Paul
Many thanks for your answer! Very helpful!
Best Regards
Francois
From: Paul Emmerich
Sent: Friday, April 3, 2020 5:19 PM
To: Scheurer François
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] different RGW Versions on same ceph cluster
No
features' to adapt itself to the available featureset?
Is RGW really partly implemented in the OSD code? Or is it just that some RGW
features depend on OSD features?
Thank you for your insights!
Cheers
Francois
____
From: Casey Bodley
Sent: Thursday,
Is RGW really partly implemented in the OSD code? Or is it just that some RGW
features depend on OSD features?
Thank you for your insights!
Cheers
Francois
____
From: Casey Bodley
Sent: Thursday, March 5, 2020 3:57 PM
To: Scheurer François; ceph-u
Hi Adam
May I ask if by chance you found a solution to your issue in the meantime?
With 2 clusters on squid 19.2.2 in a lab we also see similar issues with bucket
creation in the secondary zonegroup.
Cheers
Francois
on secondary zonegroup (single zone):
2025-07-24T19:21:24.095774+02:00 ewge1-ceph
From: Adam Prycki
Sent: Thursday, July 24, 2025 8:33 PM
To: Scheurer François; ceph-users@ceph.io
Subject: [signed OK] Re: [ceph-users] Re: How to create buckets in secondary
zonegroup?
Hi Francois