[ceph-users] Re: Radosgw log Custom Headers

2025-02-13 Thread Ansgar Jazdzewski
> On Wed, Feb 12, 2025 at 2:19 PM Paul JURCO wrote: >>> Same here, it worked only after the rgw service was restarted using this config: >>> rgw_log_http_headers http_x_forwarded_for >>> -- >>> Paul Jurco

[ceph-users] Radosgw log Custom Headers

2025-02-12 Thread Ansgar Jazdzewski
Hi folks, I'd like to make sure that the RadosGW is using X-Forwarded-For as the source IP for ACLs. However, I do not find the information in the logs. I have set (using beast): ceph config set global rgw_remote_addr_param http_x_forwarded_for ceph config set global rgw_log_http_headers http_x_forw
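
A minimal sketch of the settings discussed in this thread; the restart step reflects the observation above that the change only took effect after restarting the gateways, and the orchestrator service name is an assumption:

```bash
# Use X-Forwarded-For as the client address and include it in the ops log
ceph config set global rgw_remote_addr_param http_x_forwarded_for
ceph config set global rgw_log_http_headers http_x_forwarded_for

# Restart the RGW daemons so the header settings take effect
# (service name "rgw.default" is an assumption for a cephadm deployment)
ceph orch restart rgw.default
```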

[ceph-users] SOLVED: How to Limit S3 Access to One Subuser

2024-09-03 Thread Ansgar Jazdzewski
Hi folks, I found countless questions but no real solution on how to have multiple subusers and buckets in one account while limiting access to a bucket to just one specific subuser. Here’s how I managed to make it work: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "DenyA
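
The JSON above is cut off in the archive; a fuller sketch of that kind of policy, applied here with s3cmd as one option, might look like the following. The bucket name, the subuser principal format, and the Deny/NotPrincipal pattern are assumptions, not taken from the original post.

```bash
# Hypothetical names: bucket "projectdata", subuser "appuser:uploader".
# Deny everything on the bucket to everyone except that one subuser.
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptOneSubuser",
      "Effect": "Deny",
      "NotPrincipal": { "AWS": ["arn:aws:iam:::user/appuser:uploader"] },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::projectdata",
        "arn:aws:s3:::projectdata/*"
      ]
    }
  ]
}
EOF
s3cmd setpolicy policy.json s3://projectdata
```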

[ceph-users] Re: erasure-code-lrc Questions regarding repair

2024-03-07 Thread Ansgar Jazdzewski
s a bug in the LRC plugin. > But there hasn't been any response yet. > > [1] https://tracker.ceph.com/issues/61861 > > Quoting Ansgar Jazdzewski: > > > Hi folks, > > > > I am currently testing erasure-code-lrc (1) in a multi-room, multi-rack setup. > > The

[ceph-users] Sharing our "Containerized Ceph and Radosgw Playground"

2024-02-22 Thread Ansgar Jazdzewski
Hi Folks, We are excited to announce plans for building a larger Ceph-S3 setup. To ensure its success, extensive testing is needed in advance. Some of these tests don't need a full-blown Ceph cluster on hardware but still require meeting specific logical requirements, such as a multi-site S3 setu

[ceph-users] Re: Reef 18.2.1 unable to join multisite when rgw_dns_name is configured

2024-02-21 Thread Ansgar Jazdzewski
ns_name dev.s3.localhost ``` On Wed, 21 Feb 2024 at 17:34, Ansgar Jazdzewski wrote: > > Hi folks, > > I just tried to set up a new Ceph S3 multisite setup and it looks to me > that dns-style S3 is broken in multisite, as when rgw_dns_name is > configured the `radosgw-admin pe

[ceph-users] Reef 18.2.1 unable to join multisite when rgw_dns_name is configured

2024-02-21 Thread Ansgar Jazdzewski
Hi folks, I just tried to set up a new Ceph S3 multisite setup, and it looks to me that dns-style S3 is broken in multisite: when rgw_dns_name is configured, the `radosgw-admin period update --commit` from the new member will not succeed! It looks like whenever hostnames are configured it breaks on t
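
For reference, a minimal sketch of the configuration that triggers the failure described here; the config scope is an assumption, and the hostname value is taken from the follow-up above:

```bash
# Set a dns-style endpoint name for the gateway (value from the follow-up above),
# then commit the period on the new member -- the step reported to fail.
ceph config set client.rgw rgw_dns_name dev.s3.localhost
radosgw-admin period update --commit
```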

[ceph-users] erasure-code-lrc Questions regarding repair

2024-01-15 Thread Ansgar Jazdzewski
Hi folks, I am currently testing erasure-code-lrc (1) in a multi-room, multi-rack setup. The idea is to be able to repair disk failures within the rack itself to lower bandwidth usage. ```bash ceph osd erasure-code-profile set lrc_hdd \ plugin=lrc \ crush-root=default \ crush-locality=rack \ crush-fail
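
For illustration, a complete version of the truncated profile command might look like the sketch below; the k/m/l values, failure domain, and pool name are assumptions, since only the plugin, root, and locality are visible above:

```bash
# LRC profile sketch: 4 data + 2 coding chunks, one local parity per group of 3,
# local repair confined to a rack (values are assumptions, not from the post)
ceph osd erasure-code-profile set lrc_hdd \
  plugin=lrc \
  k=4 m=2 l=3 \
  crush-root=default \
  crush-locality=rack \
  crush-failure-domain=host

# Create a pool using that profile (hypothetical pool name)
ceph osd pool create rgw.data.lrc 64 64 erasure lrc_hdd
```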

[ceph-users] persistent write-back cache and quemu

2022-06-30 Thread Ansgar Jazdzewski
Hi folks, I did a little testing with the persistent write-back cache (*1). We run Ceph Quincy 17.2.1 and QEMU 6.2.0. rbd.fio works with the cache, but as soon as we start we get something like: error: internal error: process exited while connecting to monitor: Failed to open module: /usr/lib/x86_64-li
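
For context, a minimal sketch of how the persistent write-back (pwl) cache is typically enabled on the client side; the cache path, size, and image name are assumptions, not taken from the post:

```bash
# Enable the pwl cache plugin for RBD clients (assumed values)
ceph config set client rbd_plugins pwl_cache
ceph config set client rbd_persistent_cache_mode ssd
ceph config set client rbd_persistent_cache_path /var/lib/rbd-pwl
ceph config set client rbd_persistent_cache_size 1G

# Check the image state (hypothetical pool/image name)
rbd status mypool/myimage
```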

[ceph-users] Re: Inconsistent PGs after upgrade to Pacific

2022-06-23 Thread Ansgar Jazdzewski
~# rados listsnaps 200020744f4. -p $POOL > 200020744f4.: > cloneid  snaps  size  overlap > 1  1  0  [] > head  -  0 > > > Is it safe to assume that these objects belong to a somewhat broken snapshot > and can be removed safely without causin

[ceph-users] Re: Inconsistent PGs after upgrade to Pacific

2022-06-23 Thread Ansgar Jazdzewski
naffected? > > Is there any way to validate that theory? I am a bit hesitant to just > run "rmsnap". Could that cause inconsistent data to be written back to > the actual objects? > > > Best regards, > > Pascal > > > > Ansgar Jazdzewski wrote on 23.06.22

[ceph-users] Re: Inconsistent PGs after upgrade to Pacific

2022-06-23 Thread Ansgar Jazdzewski
Hi Pascal, We just had a similar situation on our RBD and found some bad data in RADOS. Here is how we did it: for i in $(rados list-inconsistent-pg $POOL | jq -er .[]); do rados list-inconsistent-obj $i | jq -er '.inconsistents[].object.name' | awk -F'.' '{print $2}'; done We then found inconsi
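
The same one-liner, reformatted for readability (a sketch; $POOL is a placeholder as in the original):

```bash
# List inconsistent PGs in the pool, then for each PG list the inconsistent
# objects and pull out the RBD image id from the object name.
for pg in $(rados list-inconsistent-pg "$POOL" | jq -er '.[]'); do
  rados list-inconsistent-obj "$pg" \
    | jq -er '.inconsistents[].object.name' \
    | awk -F'.' '{print $2}'   # object names look like rbd_data.<imageid>.<objno>
done
```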

[ceph-users] Re: RGW with keystone and dns-style buckets

2022-01-11 Thread Ansgar Jazdzewski
o Ansgar On Mon, 10 Jan 2022 at 14:52, Ansgar Jazdzewski wrote: > > Hi folks, > > I am trying to get dns-style buckets running and stumbled across an issue with > tenants > > I can access the bucket like https://s3.domain/: but I > did not find a way to do it with DNS-Sty

[ceph-users] RGW with keystone and dns-style buckets

2022-01-10 Thread Ansgar Jazdzewski
Hi folks, I am trying to get dns-style buckets running and stumbled across an issue with tenants. I can access the bucket like https://s3.domain/: but I did not find a way to do it with DNS-style, something like https://_.s3.domain ! Am I missing something in the documentation? Thanks for your help!

[ceph-users] Re: upgraded to cluster to 16.2.6 PACIFIC

2021-11-09 Thread Ansgar Jazdzewski
> IIRC you get a HEALTH_WARN message that there are OSDs with old metadata > format. You can suppress that warning, but I guess operators feel like > they want to deal with the situation and get it fixed rather than ignore it. Yes, and if suppressing the warning gets forgotten you run into other issue

[ceph-users] Re: upgraded to cluster to 16.2.6 PACIFIC

2021-11-09 Thread Ansgar Jazdzewski
On Tue, 9 Nov 2021 at 11:08, Dan van der Ster wrote: > > Hi Ansgar, > > To clarify the messaging or docs, could you say where you learned that > you should enable the bluestore_fsck_quick_fix_on_mount setting? Is > that documented somewhere, or did you have it enabled from previously? > Th

[ceph-users] upgraded to cluster to 16.2.6 PACIFIC

2021-11-08 Thread Ansgar Jazdzewski
Hi fellow Ceph users, I did an upgrade from 14.2.23 to 16.2.6 not knowing that the current minor version had this nasty bug! [1] [2] We were able to resolve some of the omap issues in the rgw.index pool but still have 17 PGs to fix in the rgw.meta and rgw.log pools! I have a couple of questions:

[ceph-users] OSDs crash after deleting unfound object in Nautilus 14.2.22

2021-09-09 Thread Ansgar Jazdzewski
Hi folks, We had to delete some unfound objects in our cache to get our cluster working again! But after an hour we saw OSDs crash. We found that it is caused by the fact that we deleted the "hit_set_8.3fc_archive_2021-09-09 08:25:58.520768Z_2021-09-09 08:26:18.907234Z" object. The crash log can be

[ceph-users] Re: Manually add monitor to a running cluster

2021-08-19 Thread Ansgar Jazdzewski
Hi, so yes, I was assuming that the new mon is a member of the cluster, so packages are installed and ceph.conf is in place! You also need to add the IP of the new mon to the ceph.conf when you are done and redistribute it to all members of the cluster. Ansgar On Thu, 19 Aug 2021 at 15:30,

[ceph-users] Re: Manually add monitor to a running cluster

2021-08-19 Thread Ansgar Jazdzewski
Hi, On Thu, 19 Aug 2021 at 14:57, Francesco Piraneo G. wrote: > > > > mkdir /var/lib/ceph/mon/ceph-$(hostname -s) > > This has to be done on the new host, right? Yes > > ceph auth get mon. -o /tmp/mon-keyfile > > ceph mon getmap -o /tmp/mon-monmap > This has to be done on the runni

[ceph-users] Re: Manually add monitor to a running cluster

2021-08-19 Thread Ansgar Jazdzewski
Hi Francesco, in short you need to do this: mkdir /var/lib/ceph/mon/ceph-$(hostname -s) ceph auth get mon. -o /tmp/mon-keyfile ceph mon getmap -o /tmp/mon-monmap ceph-mon -i $(hostname -s) --mkfs --monmap /tmp/mon-monmap --keyring /tmp/mon-keyfile chown -R ceph: /var/lib/ceph/mon/ceph-$
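
The same steps as a formatted sketch; the final chown target and the service start are assumptions based on a typical manual mon deployment:

```bash
# On the new mon host: create the data dir, fetch the mon keyring and monmap,
# initialize the mon store, fix ownership, and start the daemon.
mkdir /var/lib/ceph/mon/ceph-$(hostname -s)
ceph auth get mon. -o /tmp/mon-keyfile
ceph mon getmap -o /tmp/mon-monmap
ceph-mon -i $(hostname -s) --mkfs --monmap /tmp/mon-monmap --keyring /tmp/mon-keyfile
chown -R ceph: /var/lib/ceph/mon/ceph-$(hostname -s)
systemctl enable --now ceph-mon@$(hostname -s)
```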

[ceph-users] Re: 1/3 mons down! mon do not rejoin

2021-07-26 Thread Ansgar Jazdzewski
When did it leave the cluster? > > > I also found that the rocksdb on osd01 is only 1MB in size and 345MB on the > > other mons! > > It sounds like mon.osd01's db has been re-initialized as empty, e.g. > maybe the directory was lost somehow between reboots? >

[ceph-users] Re: 1/3 mons down! mon do not rejoin

2021-07-26 Thread Ansgar Jazdzewski
rst failed before the > on-call team rebooted it? They might give a clue what happened to > start this problem, which maybe is still happening now. > > This looks similar but it was eventually found to be a network issue: > https://tracker.ceph.com/issues/48033 > > -- Dan > >

[ceph-users] Re: 1/3 mons down! mon do not rejoin

2021-07-25 Thread Ansgar Jazdzewski
16:28:43.418 7fcc613d8700 10 mon.osd01@0(probing) e1 cancel_probe_timeout (none scheduled) 2021-07-25 16:28:43.418 7fcc613d8700 10 mon.osd01@0(probing) e1 reset_probe_timeout 0x55c6b3553260 after 2 seconds still looks like a connection issue but I can connect! using telnet root@osd01:~# tel

[ceph-users] Re: 1/3 mons down! mon do not rejoin

2021-07-25 Thread Ansgar Jazdzewski
On Sun, 25 Jul 2021 at 17:17, Dan van der Ster wrote: > > > raise the min version to nautilus > > Are you referring to the min osd version or the min client version? Yes, sorry, that was not written clearly. > I don't think the latter will help. > > Are you sure that mon.osd01 can reach those oth

[ceph-users] Re: 1/3 mons down! mon do not rejoin

2021-07-25 Thread Ansgar Jazdzewski
ould: > > 1. Investigate why mon.osd01 isn't coming back into the quorum... The logs on > that mon or the others can help. > 2. If you decide to give up on mon.osd01, then first you should rm it from > the cluster before you add a mon from another host. > > .. Dan >

[ceph-users] 1/3 mons down! mon do not rejoin

2021-07-25 Thread Ansgar Jazdzewski
Hi folks, I have a cluster running Ceph 14.2.22 on Ubuntu 18.04. Some hours ago one of the mons stopped working and the on-call team rebooted the node; now the mon is not joining the Ceph cluster. TCP ports of the mons are open and reachable! ceph health detail HEALTH_WARN 1/3 mons down, quorum

[ceph-users] Re: suggestion for Ceph client network config

2021-06-11 Thread Ansgar Jazdzewski
Hi, I would do an extra network / VLAN, mostly for security reasons; also take a look at CTDB for Samba failover. Have a nice weekend, Ansgar On Fri, 11 Jun 2021 at 08:21, Götz Reinicke wrote: > > Hi all > > We get a new Samba SMB fileserver which mounts our CephFS for exporting some > sha

[ceph-users] Re: CephFS design

2021-06-11 Thread Ansgar Jazdzewski
Hi, first of all, check the workload you would like to have on the filesystem; if you plan to migrate an old one, do some proper performance testing of the old storage. The io500 can give some ideas: https://www.vi4io.org/io500/start but it depends on the use case of the filesystem. Cheers, Ansgar On Fri.

[ceph-users] Re: Can we deprecate FileStore in Quincy?

2021-06-03 Thread Ansgar Jazdzewski
Hi folks, I'm fine with dropping FileStore in the R release! Only one thing to add: please add a warning to all versions we can upgrade to the R release from, so not only Quincy but also Pacific! Thanks, Ansgar Neha Ojha wrote on Tue, 1 Jun 2021, 21:24: > Hello everyone, > > Given that

[ceph-users] Re: RadosGW unable to start resharding

2021-03-10 Thread Ansgar Jazdzewski
., 10 Mar 2021 at 12:44, Ansgar Jazdzewski wrote: > > Hi, > > Neither command came back with any output after 30 min > > I found that people have run: > radosgw-admin reshard cancel --tenant="..." --bucket="..." > --uid="..." -

[ceph-users] Re: RadosGW unable to start resharding

2021-03-10 Thread Ansgar Jazdzewski
e Thanks, Ansgar On Wed, 10 Mar 2021 at 10:55, Konstantin Shalygin wrote: > > Try to look at: > radosgw-admin reshard stale-instances list > > Then: > radosgw-admin reshard stale-instances rm > > > > k > > On 10 Mar 2021, at 12:11, Ansgar Jazdzewski

[ceph-users] RadosGW unable to start resharding

2021-03-10 Thread Ansgar Jazdzewski
Hi folks, We are running Ceph 14.2.16 and I would like to reshard a bucket because I have a large object warning! So I did: radosgw-admin bucket reshard --tenant="..." --bucket="..." --uid="..." --num-shards=512 but I received an error: ERROR: the bucket is currently undergoing resharding and canno
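
For reference, the commands that come up in the replies above, collected as a sketch; the elided tenant/bucket/uid values stay as placeholders:

```bash
# Inspect and clear the stuck reshard state, then retry the reshard
radosgw-admin reshard list                      # show queued/in-progress reshard jobs
radosgw-admin reshard cancel --tenant="..." --bucket="..." --uid="..."
radosgw-admin reshard stale-instances list      # stale bucket-index instances left behind
radosgw-admin reshard stale-instances rm
radosgw-admin bucket reshard --tenant="..." --bucket="..." --uid="..." --num-shards=512
```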

[ceph-users] Re: Question about expansion existing Ceph cluster - adding OSDs

2020-10-21 Thread Ansgar Jazdzewski
Hi, You can make use of upmap so you do not need to rebalance the entire CRUSH map every time you change the weight. https://docs.ceph.com/en/latest/rados/operations/upmap/ Hope it helps, Ansgar Kristof Coucke wrote on Wed, 21 Oct 2020, 13:29: > Hi, > > I have a cluster with 182 OS
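
A minimal sketch of turning on the upmap balancer per the linked documentation (not taken verbatim from the thread):

```bash
# upmap requires Luminous or newer clients
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status
```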

[ceph-users] Re: Radosgw Multisite Sync

2020-08-14 Thread Ansgar Jazdzewski
ew group/flow/pipe for each tenant? Thanks, Ansgar On Fri, 14 Aug 2020 at 16:59, Ansgar Jazdzewski wrote: > > Hi, > > > As I can understand, we are talking about Ceph 15.2.x Octopus, right? > > Yes, I am on Ceph 15.2.4 > > > What is the number of zones/realm
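
A sketch of the sync-policy group/flow/pipe objects the question refers to, following the multisite sync policy documentation; group, flow, pipe, and zone names are hypothetical:

```bash
# Zonegroup-level sync policy: one group, a symmetrical flow between two zones,
# and a pipe that allows syncing all buckets (names are assumptions)
radosgw-admin sync group create --group-id=group1 --status=enabled
radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror \
  --flow-type=symmetrical --zones=node01,node02
radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 \
  --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
radosgw-admin period update --commit
```

A per-tenant or per-bucket policy can be expressed the same way by adding --bucket= to the sync group commands, which is what the question about one group/flow/pipe per tenant is getting at.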

[ceph-users] Re: Radosgw Multisite Sync

2020-08-14 Thread Ansgar Jazdzewski
Hi, > As I can understand, we are talking about Ceph 15.2.x Octopus, right? Yes, I am on Ceph 15.2.4 > What is the number of zones/realms/zonegroups? At the moment I run just a small test on my local machine: one zonegroup (global) with zones node01 and node02 and just one realm > Is Ceph healthy? (ceph

[ceph-users] Radosgw Multisite Sync

2020-08-14 Thread Ansgar Jazdzewski
Hi folks, I am trying to move from our own custom bucket synchronization to the built-in rados-gateway one. The multisite setup is working: https://docs.ceph.com/docs/master/radosgw/multisite/ All buckets and users are visible in both clusters. Next I tried to set up the multisite sync: https://docs.cep