[ceph-users] Using multiple SSDs as DB

2022-10-21 Thread Christian
it sounds like it would limit the number of SSDs used for DB devices. How can I use all of the SSDs' capacity? Best, Christian

[ceph-users] Re: Using multiple SSDs as DB

2022-10-25 Thread Christian
would have) resulted in extra 8 Ceph OSDs with no db device. Best, Christian

[ceph-users] Re: Small issue with perms

2024-07-18 Thread Christian Rohmann
NZHCHABDF4/ * https://tracker.ceph.com/issues/64548 * Reef backport (NOT merged yet): https://github.com/ceph/ceph/pull/58458 Maybe your issue is somewhat related? Regards Christian

[ceph-users] Re: Release 18.2.4

2024-07-24 Thread Christian Rohmann
tioned by k0ste as someone who might know more about and could make changes to "the flow" Regards Christian

[ceph-users] Re: Recovering from total mon loss and backing up lockbox secrets

2024-08-06 Thread Christian Rohmann
by poelzl to add automatic backups: https://github.com/ceph/ceph/pull/56772 Regards Christian

[ceph-users] Re: DB sizing for lots of large files

2020-11-26 Thread Christian Wuerdig
Sorry, I replied to the wrong email thread before, so reposting this: I think it's time to start pointing out that the 3/30/300 logic no longer really holds true post-Octopus: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/CKRCB3HUR7UDRLHQGC7XXZPWCWNJSBNT/ Although I suppose i

[ceph-users] Re: Advice on SSD choices for WAL/DB?

2020-11-26 Thread Christian Wuerdig
I think it's time to start pointing out that the 3/30/300 logic no longer really holds true post-Octopus: https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/CKRCB3HUR7UDRLHQGC7XXZPWCWNJSBNT/ On Thu, 2 Jul 2020 at 00:09, Burkhard Linke < burkhard.li...@computational.bio.uni-giesse

[ceph-users] Re: Failure Domain = NVMe?

2021-03-11 Thread Christian Wuerdig
For EC 8+2 you can get away with 5 hosts by ensuring each host gets 2 shards similar to this: https://ceph.io/planet/erasure-code-on-small-clusters/ If a host dies/goes down you can still recover all data (although at that stage your cluster is no longer available for client io). You shouldn't just
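For context, a minimal sketch of the kind of CRUSH rule the linked article describes (rule name and id are placeholders, not from the thread; verify against your own decompiled crushmap before injecting anything):

# decompile the current crushmap, append the rule, recompile and inject
ceph osd getcrushmap -o crush.bin && crushtool -d crush.bin -o crush.txt
cat >> crush.txt <<'EOF'
rule ec_8_2_small {
        id 99
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 5 type host       # pick 5 hosts
        step chooseleaf indep 2 type osd    # 2 EC shards per host = 10 shards total
        step emit
}
EOF
crushtool -c crush.txt -o crush.new && ceph osd setcrushmap -i crush.new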

[ceph-users] Re: Can I create 8+2 Erasure coding pool on 5 node?

2021-03-27 Thread Christian Wuerdig
Once you have your additional 5 nodes you can adjust your crushrule to have failure domain = host and ceph will rebalance the data automatically for you. This will involve quite a bit of data movement (at least 50% of your data will need to be migrated) so can take some time. Also the official reco

[ceph-users] Re: OT: How to Build a poor man's storage with ceph

2021-06-08 Thread Christian Wuerdig
Since you mention NextCloud it will probably be an RGW deployment. Also it's not clear why 3 nodes? Is rack-space at a premium? Just to compare your suggestion: 3x24 (I guess 4U?) x 8TB with Replication = 576 TB raw storage + 192 TB usable Let's go 6x12 (2U) x 4TB with EC 3+2 = 288 TB raw storage + 172
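For anyone skimming the archive, the arithmetic behind those two layouts works out roughly like this (a sketch; TB throughout, filesystem and metadata overhead ignored):

# 3 nodes x 24 bays x 8TB with 3x replication
echo "raw: $((3*24*8)) TB, usable: $((3*24*8/3)) TB"                              # 576 / 192
# 6 nodes x 12 bays x 4TB with EC 3+2 (3 data chunks out of 5)
awk 'BEGIN{raw=6*12*4; printf "raw: %d TB, usable: %.1f TB\n", raw, raw*3/5}'     # 288 / 172.8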

[ceph-users] RADOSGW Keystone integration - S3 bucket policies targeting not just other tenants / projects ?

2021-06-16 Thread Christian Rohmann
e (https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/GY7VUKCQ5QUMDYSFUJE233FKBRADXRZK/#GY7VUKCQ5QUMDYSFUJE233FKBRADXRZK) but unfortunately with no discussion / responses then. Regards Christian

[ceph-users] rgw multisite sync not syncing data, error: RGW-SYNC:data:init_data_sync_status: ERROR: failed to read remote data log shards

2021-06-25 Thread Christian Rohmann
log shards Does anybody have any hints on where to look for what could be broken here? Thanks a bunch, Regards Christian

[ceph-users] Re: rgw multisite sync not syncing data, error: RGW-SYNC:data:init_data_sync_status: ERROR: failed to read remote data log shards

2021-06-27 Thread Christian Rohmann
Hey Dominic, thanks for your quick response! On 25/06/2021 19:45, dhils...@performair.com wrote: Christian; Do the second site's RGW instance(s) have access to the first site's OSDs? Is the reverse true? It's been a while since I set up the multi-site sync between our clust

[ceph-users] Re: ceph df (octopus) shows USED is 7 times higher than STORED in erasure coded pool

2021-07-06 Thread Christian Wuerdig
Ceph on a single host makes little to no sense. You're better off running something like ZFS On Tue, 6 Jul 2021 at 23:52, Wladimir Mutel wrote: > I started my experimental 1-host/8-HDDs setup in 2018 with > Luminous, > and I read > https://ceph.io/community/new-luminous-erasure-co

[ceph-users] RocksDB degradation / manual compaction vs. snaptrim operations choking Ceph to a halt

2021-07-07 Thread Christian Rohmann
cient then? Regards Christian

[ceph-users] Re: rgw multisite sync not syncing data, error: RGW-SYNC:data:init_data_sync_status: ERROR: failed to read remote data log shards

2021-07-07 Thread Christian Rohmann
We found the issue causing data not being synced On 25/06/2021 18:24, Christian Rohmann wrote: What is apparently not working in the sync of actual data. Upon startup the radosgw on the second site shows: 2021-06-25T16:15:06.445+ 7fe71eff5700  1 RGW-SYNC:meta: start 2021-06-25T16:15

[ceph-users] Re: RocksDB degradation / manual compaction vs. snaptrim operations choking Ceph to a halt

2021-07-08 Thread Christian Rohmann
otes. I suppose with the EoL of Nautilus more and more clusters will now make the jump to the Octopus release and convert their OSDs to OMAP in the process. Even if not all clusters RocksDBs would go over the edge, in any case running a compaction should not hurt right? Thanks aga

[ceph-users] Re: pgcalc tool removed (or moved?) from ceph.com ?

2021-07-08 Thread Christian Rohmann
ecause the cert is only valid for old.ceph.com Regards Christian

[ceph-users] Re: pgcalc tool removed (or moved?) from ceph.com ?

2021-07-08 Thread Christian Rohmann
cate * but https://ceph.com/pgcalc/ is not rewritten to the old.ceph.com domain and thus the certificate error because the cert is only valid for old.ceph.com Regards Christian thanks for the answer :) i still get a 404 on ceph.com/pgcalc (and no redirect to old.ceph.com also no cert m

[ceph-users] Re: Multiple DNS names for RGW?

2021-08-17 Thread Christian Rohmann
rrent master zone? The intention would be to avoid involving the clients having to update their endpoint in case of a failover. Thanks and with kind regards Christian

[ceph-users] Re: Multiple DNS names for RGW?

2021-08-17 Thread Christian Rohmann
aster? From what you said I read that I cannot: a) use an additional rgw_dns_name, as only one can be configured (right?) b) simply rewrite the hostname from the frontend-proxy / lb to the backends as this will invalidate the sigv4 the clients do? Regards Chri

[ceph-users] Re: [Suspicious newsletter] Re: create a Multi-zone-group sync setup

2021-08-18 Thread Christian Rohmann
ers to select one of them? Regards Christian

[ceph-users] Re: What's your biggest ceph cluster?

2021-09-02 Thread Christian Wuerdig
This probably provides a reasonable overview - https://ceph.io/en/news/blog/2020/public-telemetry-dashboards/, specifically the grafana dashboard is here: https://telemetry-public.ceph.com Keep in mind not all clusters have telemetry enabled The largest recorded cluster seems to be in the 32-64PB

[ceph-users] Re: RocksDB options for HDD, SSD, NVME Mixed productions

2021-09-21 Thread Christian Wuerdig
It's been discussed a few times on the list but RocksDB levels essentially grow by a factor of 10 (max_bytes_for_level_multiplier) by default and you need (level-1)*10 space for the next level on your drive to avoid spill over. So the sequence (by default) is 256MB -> 2.56GB -> 25.6GB -> 256GB a
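To make the level math concrete, a small sketch using the default 256MB base and x10 multiplier (numbers only, not an exact model of RocksDB's space accounting):

awk 'BEGIN{sz=256; cum=0; for(l=1;l<=4;l++){cum+=sz; printf "L%d: %d MB, cumulative: %d MB\n", l, sz, cum; sz*=10}}'
# L1: 256 MB, L2: 2560 MB (cum ~2.8 GB), L3: 25600 MB (cum ~28 GB), L4: 256000 MB (cum ~284 GB)
# which is where the often-quoted ~3 / ~30 / ~300 GB DB partition sizing figures come from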

[ceph-users] Re: RocksDB options for HDD, SSD, NVME Mixed productions

2021-09-21 Thread Christian Wuerdig
them. Somebody else would have to chime in to confirm. Also keep in mind that even with 60GB partition you will still get spillover since you seem to have around 120-130GB meta data per OSD so moving to 160GB partitions would seem to be better. > > > > > > > Christian Wuerdig , 21

[ceph-users] Re: RocksDB options for HDD, SSD, NVME Mixed productions

2021-09-21 Thread Christian Wuerdig
bo > Senior Infrastructure Engineer > --- > Agoda Services Co., Ltd. > e: istvan.sz...@agoda.com > --- > > On 2021. Sep 21., at 9:19, Christian Wuerdig > wrote: > > Email received fro

[ceph-users] Re: RocksDB options for HDD, SSD, NVME Mixed productions

2021-09-21 Thread Christian Wuerdig
ter with ec 4:2 :(( > > Istvan Szabo > Senior Infrastructure Engineer > --- > Agoda Services Co., Ltd. > e: istvan.sz...@agoda.com > --- > > On 2021. Sep 21., at 20:21, Christ

[ceph-users] Re: Corruption on cluster

2021-09-21 Thread Christian Wuerdig
This tracker item should cover it: https://tracker.ceph.com/issues/51948 On Wed, 22 Sept 2021 at 11:03, Nigel Williams wrote: > > Could we see the content of the bug report please, that RH bugzilla entry > seems to have restricted access. > "You are not authorized to access bug #1996680." > > On

[ceph-users] Re: Limiting osd or buffer/cache memory with Pacific/cephadm?

2021-09-28 Thread Christian Wuerdig
buff/cache is the Linux kernel buffer and page cache which is unrelated to the ceph bluestore cache. Check the memory consumption of your individual OSD processes to confirm. Top also says 132GB available (since buffers and page cache entries will be dropped automatically if processes need more RAM
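A quick way to check what the OSD processes themselves are using versus the kernel page cache (a sketch; adjust to your deployment):

ps -C ceph-osd -o pid,rss,cmd --sort=-rss | head     # resident memory per OSD daemon
ceph config get osd osd_memory_target                # bluestore autotune target to compare against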

[ceph-users] Re: osd_memory_target=level0 ?

2021-09-29 Thread Christian Wuerdig
Bluestore memory targets have nothing to do with spillover. It's already been said several times: The spillover warning is simply telling you that instead of writing data to your supposedly fast wal/blockdb device it's now hitting your slow device. You've stated previously that your fast device is

[ceph-users] bucket_index_max_shards vs. no resharding in multisite? How to brace RADOS for huge buckets

2021-09-30 Thread Christian Rohmann
zone has bucket_index_max_shards=11 Should I align this and use "11" as the default static number of shards for all new buckets then? Maybe an even higher (prime) number just to be safe? Regards Christian
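If the goal is a static default shard count for newly created buckets, something along these lines should do it (a sketch; rgw_override_bucket_index_max_shards is the option name as documented, please verify for your release, and the zonegroup edit is the usual multisite route):

# per-RGW default for new buckets
ceph config set client.rgw rgw_override_bucket_index_max_shards 11
# or set bucket_index_max_shards in the zonegroup and commit the period
radosgw-admin zonegroup get > zonegroup.json
# ... edit "bucket_index_max_shards" in zonegroup.json ...
radosgw-admin zonegroup set < zonegroup.json
radosgw-admin period update --commit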

[ceph-users] Re: bucket_index_max_shards vs. no resharding in multisite? How to brace RADOS for huge buckets

2021-09-30 Thread Christian Rohmann
On 30/09/2021 17:02, Christian Rohmann wrote: Looking at my zones I can see that the master zone (converted from previously single-site setup) has  bucket_index_max_shards=0 while the other, secondary zone has  bucket_index_max_shards=11 Should I align this and use "11" as t

[ceph-users] Re: osd_memory_target=level0 ?

2021-09-30 Thread Christian Wuerdig
mory buffers. On Thu, 30 Sept 2021 at 21:02, Szabo, Istvan (Agoda) wrote: > > Hi Christian, > > Yes, I very clearly know what is spillover, read that github leveled document > in the last couple of days every day multiple time. (Answers for your > questions are after the c

[ceph-users] Re: osd_memory_target=level0 ?

2021-09-30 Thread Christian Wuerdig
That is - one thing you could do is to rate limit PUT requests on your haproxy down to a level that your cluster is stable. At least that gives you a chance to finish the PG scaling without OSDs dying on you constantly On Fri, 1 Oct 2021 at 11:56, Christian Wuerdig wrote: > > Ok, so I

[ceph-users] Re: Multisite reshard stale instances

2021-10-01 Thread Christian Rohmann
some stale instances on master as secondary site after migrating from a single site to multisite. Did you ever find out what to do about those stale instances then? Regards Christian

[ceph-users] Re: Multisite reshard stale instances

2021-10-01 Thread Christian Rohmann
r to be growing, but still, I'd like to clean those up. I did not explicitly disable dynamic sharding in ceph.conf until recently - but question is, if this was even necessary since RGW does recognize when it's running in multisite sync. Regards Christian

[ceph-users] Re: osd_memory_target=level0 ?

2021-10-01 Thread Christian Wuerdig
, Szabo, Istvan (Agoda) wrote: > > Thank you very much Christian, maybe you have idea how can I take out the > cluster from this state? Something blocks the recovery and the rebalance, > something stuck somewhere, thats why can’t increase the pg further. > I don’t have auto pg s

[ceph-users] Re: Multisite reshard stale instances

2021-10-04 Thread Christian Rohmann
ared up? Also just like for the other reporters of this issue, in my case most buckets are deleted buckets, but not all of them. I just hope somebody with a little more insight on the mechanisms at play here joins this conversation. Regards Christian

[ceph-users] Re: Multisite reshard stale instances

2021-10-04 Thread Christian Rohmann
On 04/10/2021 12:22, Christian Rohmann wrote: So there is no reason those instances are still kept? How and when are those instances cleared up? Also just like for the other reporters of this issue, in my case most buckets are deleted buckets, but not all of them. I just hope somebody with a

[ceph-users] Re: Erasure coded pool chunk count k

2021-10-05 Thread Christian Wuerdig
A couple of notes to this: Ideally you should have at least 2 more failure domains than your base resilience (K+M for EC or size=N for replicated) - reasoning: Maintenance needs to be performed so chances are every now and then you take a host down for a few hours or possibly days to do some upgra

[ceph-users] Re: CEPH 16.2.x: disappointing I/O performance

2021-10-05 Thread Christian Wuerdig
Maybe some info is missing but 7k write IOPs at 4k block size seem fairly decent (as you also state) - the bandwidth automatically follows from that so not sure what you're expecting? I am a bit puzzled though - by my math 7k IOPS at 4k should only be 27MiB/sec - not sure how the 120MiB/sec was ach
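The back-of-the-envelope conversion referenced there, as a one-liner sketch:

awk 'BEGIN{printf "%.1f MiB/s\n", 7000*4096/1048576}'   # 7k IOPS x 4 KiB per op ~= 27.3 MiB/s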

[ceph-users] Re: CEPH 16.2.x: disappointing I/O performance

2021-10-05 Thread Christian Wuerdig
as well, suggested that in a > replicated pool writes and reads are handled by the primary PG, which would > explain this write bandwidth limit. > > /Z > > On Tue, 5 Oct 2021, 22:31 Christian Wuerdig, > wrote: > >> Maybe some info is missing but 7k write IOP

[ceph-users] Re: Metrics for object sizes

2021-10-14 Thread Christian Rohmann
bucket stats --bucket mybucket Doing a bucket_size / number_of_objects gives you an average object size per bucket and that certainly is an indication on buckets with rather small objects. Regards Christian
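A sketch of that calculation with jq (the usage field names are as I recall them from radosgw-admin bucket stats output and may differ slightly between releases):

radosgw-admin bucket stats --bucket=mybucket \
  | jq '.usage["rgw.main"] | .size_actual / .num_objects'   # average object size in bytes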

[ceph-users] Packages for 17.2.7 released without release notes / announcement (Re: Re: Status of Quincy 17.2.5 ?)

2023-10-30 Thread Christian Rohmann
Sorry to dig up this old thread ... On 25.01.23 10:26, Christian Rohmann wrote: On 20/10/2022 10:12, Christian Rohmann wrote: 1) May I bring up again my remarks about the timing: On 19/10/2022 11:46, Christian Rohmann wrote: I believe the upload of a new release to the repo prior to the

[ceph-users] Automatic triggering of the Ubuntu SRU process, e.g. for the recent 17.2.7 Quincy point release?

2023-11-12 Thread Christian Rohmann
Christian

[ceph-users] Re: EC Profiles & DR

2023-12-05 Thread Christian Wuerdig
You can structure your crush map so that you get multiple EC chunks per host in a way that you can still survive a host outage even though you have fewer hosts than k+1 For example if you run an EC=4+2 profile on 3 hosts you can structure your crushmap so that you have 2 chunks per host. Thi

[ceph-users] RGW rate-limiting or anti-hammering for (external) auth requests // Anti-DoS measures

2023-12-22 Thread Christian Rohmann
ut in place? * Does it make sense to extend RGWs capabilities to deal with those cases itself? ** adding negative caching ** rate limits on concurrent external authentication requests (or is there a pool of connections for those requests?) Regards Christian [1] https://docs.ceph.com

[ceph-users] Re: cephadm - podman vs docker

2023-12-31 Thread Christian Wuerdig
General complaint about docker is usually that it by default stops all running containers when the docker daemon gets shutdown. There is the "live-restore" option (which has been around for a while) but that's turned off by default (and requires a daemon restart to enable). It only supports patch u
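For reference, enabling it is a one-line daemon config plus a daemon restart (a sketch; merge by hand if you already have a daemon.json, since this overwrites it):

echo '{ "live-restore": true }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker   # containers started after this survive later daemon restarts/upgrades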

[ceph-users] Re: RGW rate-limiting or anti-hammering for (external) auth requests // Anti-DoS measures

2024-01-09 Thread Christian Rohmann
Happy New Year Ceph-Users! With the holidays and people likely being away, I take the liberty to bluntly BUMP this question about protecting RGW from DoS below: On 22.12.23 10:24, Christian Rohmann wrote: Hey Ceph-Users, RGW does have options [1] to rate limit ops or bandwidth per bucket

[ceph-users] Re: RGW rate-limiting or anti-hammering for (external) auth requests // Anti-DoS measures

2024-01-12 Thread Christian Rohmann
ystem (Keystone in my case) at full rate. Regards Christian

[ceph-users] Re: 3 DC with 4+5 EC not quite working

2024-01-14 Thread Christian Wuerdig
I could be wrong however as far as I can see you have 9 chunks which requires 9 failure domains. Your failure domain is set to datacenter which you only have 3 of. So that won't work. You need to set your failure domain to host and then create a crush rule to choose a DC and choose 3 hosts within
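A sketch of the rule shape being described (3 datacenters, 3 hosts picked in each, giving 9 distinct failure domains for the 4+5 chunks; rule name and id are placeholders):

rule ec_4_5_3dc {
        id 98
        type erasure
        step set_chooseleaf_tries 5
        step set_choose_tries 100
        step take default
        step choose indep 3 type datacenter   # one branch per DC
        step chooseleaf indep 3 type host     # 3 chunks per DC = 9 total
        step emit
}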

[ceph-users] Re: how can install latest dev release?

2024-01-31 Thread Christian Rohmann
containers being built somewhere to use with cephadm. Regards Christian

[ceph-users] Re: Throughput metrics missing iwhen updating Ceph Quincy to Reef

2024-02-01 Thread Christian Rohmann
wondering if ceph-exporter ([2]) is also built and packaged via the ceph packages [3] for installations that use them? Regards Christian [1] https://docs.ceph.com/en/latest/mgr/prometheus/#ceph-daemon-performance-counters-metrics [2] https://github.com/ceph/ceph/tree/main/src/exporter [3

[ceph-users] Re: how can install latest dev release?

2024-02-01 Thread Christian Rohmann
"latest" documentation is at https://docs.ceph.com/en/latest/install/get-packages/#ceph-development-packages. But it seems nothing has changed. There are dev packages available at the URLs mentioned there. Regards Christian

[ceph-users] Re: Throughput metrics missing iwhen updating Ceph Quincy to Reef

2024-02-05 Thread Christian Rohmann
On 01.02.24 10:10, Christian Rohmann wrote: [...] I am wondering if ceph-exporter ([2] is also built and packaged via the ceph packages [3] for installations that use them? [2] https://github.com/ceph/ceph/tree/main/src/exporter [3] https://docs.ceph.com/en/latest/install/get-packages/ I

[ceph-users] ceph-crash NOT reporting crashes due to wrong permissions on /var/lib/ceph/crash/posted (Debian / Ubuntu packages)

2024-02-23 Thread Christian Rohmann
metry. Regards Christian

[ceph-users] Re: ceph-crash NOT reporting crashes due to wrong permissions on /var/lib/ceph/crash/posted (Debian / Ubuntu packages)

2024-02-29 Thread Christian Rohmann
On 23.02.24 16:18, Christian Rohmann wrote: I just noticed issues with ceph-crash using the Debian /Ubuntu packages (package: ceph-base): While the /var/lib/ceph/crash/posted folder is created by the package install, it's not properly chowned to ceph:ceph by the postinst s

[ceph-users] Re: debian-reef_OLD?

2024-03-05 Thread Christian Rohmann
On 04.03.24 22:24, Daniel Brown wrote: debian-reef/ Now appears to be: debian-reef_OLD/ Could this have been some sort of "release script" just messing up the renaming / symlinking to the most recent stable? Regards Christian

[ceph-users] Hanging request in S3

2024-03-06 Thread Christian Kugler
I did this multiple times and it seems to always be shard 34 that has the issue Did someone see something like this before? Any ideas how to remedy the situation or at least where to or what to look for? Best, Christian

[ceph-users] Re: rgw dynamic bucket sharding will hang io

2024-03-08 Thread Christian Rohmann
you mean by blocking IO? No bucket actions (read / write) or high IO utilization? Regards Christian

[ceph-users] Re: rgw dynamic bucket sharding will hang io

2024-03-08 Thread Christian Rohmann
On 08.03.24 14:25, Christian Rohmann wrote: What do you mean by blocking IO? No bucket actions (read / write) or high IO utilization? According to https://docs.ceph.com/en/latest/radosgw/dynamicresharding/ "Writes to the target bucket are blocked (but reads are not) briefly during resha

[ceph-users] Re: Journal size recommendations

2024-03-08 Thread Christian Rohmann
"This section applies only to the older Filestore OSD back end. Since Luminous BlueStore has been default and preferred." It's totally obsolete with bluestore. Regards Christian ___ ceph-users mailing list -- ceph-users@ceph.io

[ceph-users] Re: Hanging request in S3

2024-03-12 Thread Christian Kugler
Hi Casey, Interesting. Especially since the request it hangs on is a GET request. I set the option and restarted the RGW I test with. The POSTs for deleting take a while but there are no longer blocking GET or POST requests. Thank you! Best, Christian PS: Sorry for pressing the wrong reply

[ceph-users] Re: rgw s3 bucket policies limitations (on users)

2024-04-02 Thread Christian Rohmann
. I would love for RGW to support more detailed bucket policies, especially with external / Keystone authentication. Regards Christian

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-18 Thread Christian Rohmann
up in one of my clusters. Regards Christian

[ceph-users] Re: reef 18.2.3 QE validation status

2024-04-18 Thread Christian Rohmann
8.2.4 milestone so it's sure to be picked up. Thanks a bunch. If you miss the train, you miss the train - fair enough. Nice to know there is another one going soon and that bug is going to be on it! Regards Christian

[ceph-users] Re: Multisite: metadata behind on shards

2024-05-13 Thread Christian Rohmann
rlier versions. But there have been lots of fixes in this area ... e.g. https://tracker.ceph.com/issues/39657 Is upgrading Ceph to a more recent version an option for you? Regards Christian

[ceph-users] RGW multisite Cloud Sync module with support for client side encryption?

2022-09-12 Thread Christian Rohmann
es require users to actively make use of SSE-S3, right? Thanks again with kind regards, Christian

[ceph-users] Re: rgw multisite octopus - bucket can not be resharded after cancelling prior reshard process

2022-10-13 Thread Christian Rohmann
https://tracker.ceph.com/projects/rgw/issues?query_id=247 But you are not syncing the data in your deployment? Maybe that's a different case then? Regards Christian

[ceph-users] Re: Status of Quincy 17.2.5 ?

2022-10-19 Thread Christian Rohmann
final release and update notes. Regards Christian

[ceph-users] Mirror de.ceph.com broken?

2022-10-20 Thread Christian Rohmann
debian-17.2.4/ return 404. Regards Christian

[ceph-users] Re: Status of Quincy 17.2.5 ?

2022-10-20 Thread Christian Rohmann
week. Thanks for the info. 1) May I bring up again my remarks about the timing: On 19/10/2022 11:46, Christian Rohmann wrote: I believe the upload of a new release to the repo prior to the announcement happens quite regularly - it might just be due to the technical process of releasing. But I

[ceph-users] Re: 16.2.11 branch

2022-10-28 Thread Christian Rohmann
.2.11 which we are waiting for. TBH I was about to ask if it would not be sensible to do an intermediate release and not let it grow bigger and bigger (with even more changes / fixes) going out at once. Regards Christian

[ceph-users] Re: RGW replication and multiple endpoints

2022-11-14 Thread Christian Rohmann
ple distinct RGW in both zones. Regards Christian

[ceph-users] Re: Cloud sync to minio fails after creating the bucket

2022-11-21 Thread Christian Rohmann
es/57807) about Cloud Sync being broken since Pacific? Regards Christian

[ceph-users] Re: Cloud sync to minio fails after creating the bucket

2022-11-21 Thread Christian Rohmann
But there is a fix committed, pending backports to Quincy / Pacific: https://tracker.ceph.com/issues/57306 Regards Christian

[ceph-users] RGW Forcing buckets to be encrypted (SSE-S3) by default (via a global bucket encryption policy)?

2022-11-23 Thread Christian Rohmann
reators to apply such a policy themselves, but to apply this as a global default in RGW, forcing all buckets to have SSE enabled - transparently. If there is no way to achieve this just yet, what are your thoughts about adding such an option to RGW? Regards

[ceph-users] Re: RGW Forcing buckets to be encrypted (SSE-S3) by default (via a global bucket encryption policy)?

2022-11-23 Thread Christian Rohmann
On 23/11/2022 13:36, Christian Rohmann wrote: I am wondering if there are other options to ensure data is encrypted at rest and also only replicated as encrypted data ... I should have referenced thread https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread

[ceph-users] Re: 16.2.11 branch

2022-12-15 Thread Christian Rohmann
xes in them. Thanks a bunch! Christian

[ceph-users] Re: 16.2.11 branch

2022-12-15 Thread Christian Rohmann
On 15/12/2022 10:31, Christian Rohmann wrote: May I kindly ask for an update on how things are progressing? Mostly I am interested on the (persisting) implications for testing new point releases (e.g. 16.2.11) with more and more bugfixes in them. I guess I just have not looked on the right

[ceph-users] Re: OSD slow ops warning not clearing after OSD down

2023-01-16 Thread Christian Rohmann
total failure of an OSD ? Would be nice to fix this though to not "block" the warning status with something that's not actually a warning. Regards Christian

[ceph-users] Re: Status of Quincy 17.2.5 ?

2023-01-25 Thread Christian Rohmann
Hey everyone, On 20/10/2022 10:12, Christian Rohmann wrote: 1) May I bring up again my remarks about the timing: On 19/10/2022 11:46, Christian Rohmann wrote: I believe the upload of a new release to the repo prior to the announcement happens quite regularly - it might just be due to the

[ceph-users] Renaming a ceph node

2023-02-13 Thread Rice, Christian
Can anyone please point me at a doc that explains the most efficient procedure to rename a ceph node WITHOUT causing a massive misplaced objects churn? When my node came up with a new name, it properly joined the cluster and owned the OSDs, but the original node with no devices remained. I expe

[ceph-users] Re: [EXTERNAL] Re: Renaming a ceph node

2023-02-15 Thread Rice, Christian
name and starting it with the new name. > You only must keep the ID from the node in the crushmap! > > Regards > Manuel > > > On Mon, 13 Feb 2023 22:22:35 + > "Rice, Christian" wrote: > >> Can anyone please point me at a doc that explains the most &

[ceph-users] Trying to throttle global backfill

2023-03-08 Thread Rice, Christian
I have a large number of misplaced objects, and I have all osd settings to “1” already: sudo ceph tell osd.\* injectargs '--osd_max_backfills=1 --osd_recovery_max_active=1 --osd_recovery_op_priority=1' How can I slow it down even more? The cluster is too large, it’s impacting other network t
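One common next step beyond osd_max_backfills / osd_recovery_max_active is adding recovery sleep (a sketch; the values are illustrative, and on Quincy and later with the mClock scheduler such overrides may be ignored unless osd_mclock_override_recovery_settings is enabled):

sudo ceph tell osd.\* injectargs '--osd_recovery_sleep_hdd=0.5 --osd_recovery_sleep_ssd=0.1'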

[ceph-users] Re: Trying to throttle global backfill

2023-03-09 Thread Rice, Christian
ciative of the community response. I learned a lot in the process, had an outage-inducing scenario rectified very quickly, and got back to work. Thanks so much! Happy to answer any followup questions and return the favor when I can. From: Rice, Christian Date: Wednesday, March 8, 2023 at 3:57 P

[ceph-users] External Auth (AssumeRoleWithWebIdentity) , STS by default, generic policies and isolation by ownership

2023-03-15 Thread Christian Rohmann
ow users to create their own roles and policies to use them by default? All the examples talk about the requirement for admin caps and individual setting of '--caps="user-policy=*'. If there was a default role + policy (question #1) that could be applied to externally authenti

[ceph-users] Re: Eccessive occupation of small OSDs

2023-04-02 Thread Christian Wuerdig
With failure domain host your max usable cluster capacity is essentially constrained by the total capacity of the smallest host which is 8TB if I read the output correctly. You need to balance your hosts better by swapping drives. On Fri, 31 Mar 2023 at 03:34, Nicola Mori wrote: > Dear Ceph user

[ceph-users] Re: pg_autoscaler using uncompressed bytes as pool current total_bytes triggering false POOL_TARGET_SIZE_BYTES_OVERCOMMITTED warnings?

2023-04-21 Thread Christian Rohmann
enlighten me. Thank you and with kind regards Christian On 02/02/2022 20:10, Christian Rohmann wrote: Hey ceph-users, I am debugging a mgr pg_autoscaler WARN which states a target_size_bytes on a pool would overcommit the available storage. There is only one pool with value for

[ceph-users] Re: Encryption per user Howto

2023-05-22 Thread Christian Wuerdig
Hm, this thread is confusing in the context of S3 client-side encryption means - the user is responsible to encrypt the data with their own keys before submitting it. As far as I'm aware, client-side encryption doesn't require any specific server support - it's a function of the client SDK used whi

[ceph-users] RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-09 Thread Christian Theune
I guess that would be a good comparison for what timing to expect when running an update on the metadata. I’ll also be in touch with colleagues from Heinlein and 42on but I’m open to other suggestions. Hugs, Christian [1] We currently have 215TiB data in 230M objects. Using the “official

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-13 Thread Christian Theune
still 2.4 hours … Cheers, Christian > On 9. Jun 2023, at 11:16, Christian Theune wrote: > > Hi, > > we are running a cluster that has been alive for a long time and we tread > carefully regarding updates. We are still a bit lagging and our cluster (that > started around

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-14 Thread Christian Theune
few very large buckets (200T+) that will take a while to copy. We can pre-sync them of course, so the downtime will only be during the second copy. Christian > On 13. Jun 2023, at 14:52, Christian Theune wrote: > > Following up to myself and for posterity: > > I’m going to t

[ceph-users] RGW accessing real source IP address of a client (e.g. in S3 bucket policies)

2023-06-15 Thread Christian Rohmann
ately seems not even supported by the BEAST library which RGW uses. I opened feature requests ... ** https://tracker.ceph.com/issues/59422 ** https://github.com/chriskohlhoff/asio/issues/1091 ** https://github.com/boostorg/beast/issues/2484 but there is no outcome yet. Rega

[ceph-users] Re: RGW accessing real source IP address of a client (e.g. in S3 bucket policies)

2023-06-15 Thread Christian Rohmann
, not the public IP of the client. So the actual remote address is NOT used in my case. Did I miss any config setting anywhere? Regards and thanks for your help Christian

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-16 Thread Christian Theune
id i get something wrong? > > > > > Kind regards, > Nino > > > On Wed, Jun 14, 2023 at 5:44 PM Christian Theune wrote: > Hi, > > further note to self and for posterity … ;) > > This turned out to be a no-go as well, because you can’t silently switch the &g

[ceph-users] Re: RGW: Migrating a long-lived cluster to multi-site, fixing an EC pool mistake

2023-06-21 Thread Christian Theune
zonegroups referring to the same pools and this should only run through proper abstractions … o_O Cheers, Christian > On 14. Jun 2023, at 17:42, Christian Theune wrote: > > Hi, > > further note to self and for posterity … ;) > > This turned out to be a no-go as well, becau

[ceph-users] ceph quincy repo update to debian bookworm...?

2023-06-22 Thread Christian Peters
://download.ceph.com/debian-quincy/ bullseye main to deb https://download.ceph.com/debian-quincy/ bookworm main in the near future!? Regards, Christian

[ceph-users] Bluestore compression - Which algo to choose? Zstd really still that bad?

2023-06-26 Thread Christian Rohmann
with the decision on the compression algo? Regards Christian [1] https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#confval-bluestore_compression_algorithm [2] https://github.com/ceph/ceph/pull/33790 [3] https://github.com

[ceph-users] Re: Radogw ignoring HTTP_X_FORWARDED_FOR header

2023-06-26 Thread Christian Rohmann
"bytes_sent":0,"bytes_received":64413,"object_size":64413,"total_time":155,"user_agent":"aws-sdk-go/1.27.0 (go1.16.15; linux; amd64) S3Manager","referrer":"","trans_id":"REDACTED","authentication_typ
