> On Wed, Feb 12, 2025 at 2:19 PM Paul JURCO wrote:
>>>
>>> Same here, it worked only after rgw service was restarted using this config:
>>> rgw_log_http_headers http_x_forwarded_for
>>>
>>> --
>>> Paul Jurco
>>>
>>>
Hi folks,
I'd like to make sure that the RadosGW is using the X-Forwarded-For
header as the source IP for ACLs.
However, I do not find this information in the logs.
I have set (using Beast):
ceph config set global rgw_remote_addr_param http_x_forwarded_for
ceph config set global rgw_log_http_headers http_x_forwarded_for
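As noted elsewhere in this thread, the header logging only showed up after the
RGW service was restarted. A minimal verification sketch (the `ceph orch`
service name is a placeholder; adjust to however your RGWs are deployed):

```bash
# Confirm what the cluster actually stores for the two options
ceph config dump | grep -E 'rgw_remote_addr_param|rgw_log_http_headers'

# Restart the RGW daemons so rgw_log_http_headers takes effect (cephadm example)
ceph orch restart rgw.default   # placeholder service name
```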
Hi folks,
I found countless questions but no real solution on how to have
multiple subusers and buckets in one account while limiting access to
a bucket to just one specific subuser.
Here’s how I managed to make it work:
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "DenyA
s a bug in the LRC plugin.
> But there hasn't been any response yet.
>
> [1] https://tracker.ceph.com/issues/61861
>
> Quoting Ansgar Jazdzewski:
>
> > hi folks,
> >
> > I am currently testing erasure-code-lrc (1) in a multi-room, multi-rack setup.
> > The
Hi Folks,
We are excited to announce plans for building a larger Ceph-S3 setup.
To ensure its success, extensive testing is needed in advance.
Some of these tests don't need a full-blown Ceph cluster on hardware
but still require meeting specific logical requirements, such as a
multi-site S3 setu
rgw_dns_name dev.s3.localhost
```
On Wed, 21 Feb 2024 at 17:34, Ansgar Jazdzewski wrote:
>
> Hi folks,
>
> I just tried to set up a new Ceph S3 multisite setup, and it looks to me
> like DNS-style S3 is broken in multisite: when rgw_dns_name is
> configured, the `radosgw-admin pe
Hi folks,
I just tried to set up a new Ceph S3 multisite setup, and it looks to me
like DNS-style S3 is broken in multisite: when rgw_dns_name is
configured, the `radosgw-admin period update --commit` from the new member
will not succeed!
It looks like whenever hostnames are configured it breaks on t
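For reference, a hedged sketch of the generic secondary-zone flow around the
failing step (endpoint, keys, and the rgw_dns_name value are placeholders, not
the original poster's values):

```bash
# On the new site: pull the realm and period from the primary, then commit
radosgw-admin realm pull --url=http://primary.s3.example:8080 \
    --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
radosgw-admin period update --commit

# The DNS-style part: tell RGW which hostname to treat as the bucket suffix
ceph config set client.rgw rgw_dns_name dev.s3.localhost
```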
hi folks,
I am currently testing erasure-code-lrc (1) in a multi-room, multi-rack setup.
The idea is to be able to repair disk failures within the rack
itself, to lower bandwidth usage.
```bash
ceph osd erasure-code-profile set lrc_hdd \
plugin=lrc \
crush-root=default \
crush-locality=rack \
crush-fail
```
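The profile above is cut off in the preview. A self-contained sketch of a
complete simple-mode LRC profile follows; the k/m/l values are the illustrative
ones from the Ceph docs, not the poster's actual numbers:

```bash
ceph osd erasure-code-profile set lrc_hdd_example \
    plugin=lrc \
    k=4 m=2 l=3 \
    crush-root=default \
    crush-locality=rack \
    crush-failure-domain=host
# Create a pool from the profile to check the generated CRUSH rule
ceph osd pool create lrc_example_pool erasure lrc_hdd_example
```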
Hi folks,
I did a little testing with the persistent write-back cache (*1). We
run Ceph Quincy 17.2.1 and QEMU 6.2.0.
rbd.fio works with the cache, but as soon as we start a VM we get something like:
error: internal error: process exited while connecting to monitor:
Failed to open module: /usr/lib/x86_64-li
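For context, a hedged sketch of how the persistent write-back cache is
typically enabled for librbd clients (mode, path, and size below are
illustrative assumptions, not the poster's settings):

```bash
ceph config set client rbd_plugins pwl_cache
ceph config set client rbd_persistent_cache_mode ssd        # or "rwl" with PMEM
ceph config set client rbd_persistent_cache_path /mnt/pwl-cache
ceph config set client rbd_persistent_cache_size 1G
```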
~# rados listsnaps 200020744f4. -p $POOL
> 200020744f4.:
> cloneid  snaps  size  overlap
> 1        1      0     []
> head     -      0
>
>
> Is it safe to assume that these objects belong to a somewhat broken snapshot
> and can be removed safely without causin
naffected?
>
> Is there any way to validate that theory? I am a bit hesitant to just
> run "rmsnap". Could that cause inconsistent data to be written back to
> the actual objects?
>
>
> Best regards,
>
> Pascal
>
>
>
> Ansgar Jazdzewski wrote on 23.06.22
Hi Pascal,
We just had a similar situation on our RBD and found some bad data
in RADOS. Here is how we did it:

for i in $(rados list-inconsistent-pg $POOL | jq -er '.[]'); do
  rados list-inconsistent-obj $i | jq -er '.inconsistents[].object.name' | awk -F'.' '{print $2}'
done

we then found inconsi
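The rest of that walkthrough is cut off in the preview. A generic follow-up for
scrub inconsistencies (not necessarily what was done here, and only appropriate
when the remaining replicas are known to be good) is roughly:

```bash
# List the inconsistent PGs again, then let the primary OSD repair one of them
rados list-inconsistent-pg $POOL
ceph pg repair 1.2f   # placeholder PG id
```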
o
Ansgar
On Mon, 10 Jan 2022 at 14:52, Ansgar Jazdzewski wrote:
>
> Hi folks,
>
> I'm trying to get DNS-style buckets running and stumbled across an issue with
> tenants
>
> I can access the bucket like https://s3.domain/<tenant>:<bucket>, but I
> did not find a way to do it with DNS-Sty
Hi folks,
I'm trying to get DNS-style buckets running and stumbled across an issue with tenants.
I can access the bucket like https://s3.domain/<tenant>:<bucket>, but I
did not find a way to do it DNS-style, something like
https://<tenant>_<bucket>.s3.domain !
Am I missing something in the documentation?
Thanks for your help!
> IIRC you get a HEALTH_WARN message that there are OSDs with old metadata
> format. You can suppress that warning, but I guess operators feel like
> they want to deal with the situation and get it fixed rather than ignore it.
Yes, and if suppressing the warning gets forgotten, you run into other
issue
On Tue, 9 Nov 2021 at 11:08, Dan van der Ster wrote:
>
> Hi Ansgar,
>
> To clarify the messaging or docs, could you say where you learned that
> you should enable the bluestore_fsck_quick_fix_on_mount setting? Is
> that documented somewhere, or did you have it enabled from previously?
> Th
Hi fellow ceph users,
I did an upgrade from 14.2.23 to 16.2.6, not knowing that the current
minor version had this nasty bug! [1] [2]
We were able to resolve some of the omap issues in the rgw.index pool,
but still have 17 PGs to fix in the rgw.meta and rgw.log pools!
I have a couple of questions:
Hi Folks,
We had to delete some unfound objects in our cache to get our cluster
back to a working state! But after an hour we see OSDs crash.
We found that it is caused by the fact that we deleted the
"hit_set_8.3fc_archive_2021-09-09 08:25:58.520768Z_2021-09-09
08:26:18.907234Z" object.
The crash log can be
Hi,
so yes I was assuming that the new mon is a member of the cluster, so
packages are installed and ceph.conf is in place!
You also need to add the IP of the new mon to the ceph.conf when you
are done and redistribute it to all members of the cluster.
Ansgar
On Thu, 19 Aug 2021 at 15:30,
Hi,
On Thu, 19 Aug 2021 at 14:57, Francesco Piraneo G. wrote:
>
>
> >mkdir /var/lib/ceph/mon/ceph-$(hostname -s)
>
> This has to be done on new host, right?
Yes
> >ceph auth get mon. -o /tmp/mon-keyfile
> >ceph mon getmap -o /tmp/mon-monmap
> This has to be done on the runni
Hi Francesco,
in short you need to do this:
mkdir /var/lib/ceph/mon/ceph-$(hostname -s)
ceph auth get mon. -o /tmp/mon-keyfile
ceph mon getmap -o /tmp/mon-monmap
ceph-mon -i $(hostname -s) --mkfs --monmap /tmp/mon-monmap --keyring
/tmp/mon-keyfile
chown -R ceph: /var/lib/ceph/mon/ceph-$(hostname -s)
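The preview cuts off after the chown. On a package-based (non-cephadm)
deployment the remaining step is usually just starting the daemon and checking
quorum, roughly:

```bash
systemctl enable --now ceph-mon@$(hostname -s)
ceph quorum_status --format json-pretty
```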
When did it leave the cluster?
>
> > I also found that the rocksdb on osd01 is only 1MB in size and 345MB on the
> > other mons!
>
> It sounds like mon.osd01's db has been re-initialized as empty, e.g.
> maybe the directory was lost somehow between reboots?
>
>
rst failed before the
> on-call team rebooted it? They might give a clue what happened to
> start this problem, which maybe is still happening now.
>
> This looks similar but it was eventually found to be a network issue:
> https://tracker.ceph.com/issues/48033
>
> -- Dan
>
>
16:28:43.418 7fcc613d8700 10 mon.osd01@0(probing) e1
cancel_probe_timeout (none scheduled)
2021-07-25 16:28:43.418 7fcc613d8700 10 mon.osd01@0(probing) e1
reset_probe_timeout 0x55c6b3553260 after 2 seconds
It still looks like a connection issue, but I can connect using telnet:
root@osd01:~# tel
On Sun, 25 Jul 2021 at 17:17, Dan van der Ster wrote:
>
> > raise the min version to nautilus
>
> Are you referring to the min osd version or the min client version?
Yes, sorry, that was not written clearly.
> I don't think the latter will help.
>
> Are you sure that mon.osd01 can reach those oth
ould:
>
> 1. Investigate why mon.osd01 isn't coming back into the quorum... The logs on
> that mon or the others can help.
> 2. If you decide to give up on mon.osd01, then first you should rm it from
> the cluster before you add a mon from another host.
>
> .. Dan
>
>
hi folks
I have a cluster running Ceph 14.2.22 on Ubuntu 18.04, and some hours
ago one of the mons stopped working and the on-call team rebooted the
node; now the mon is not joining the Ceph cluster.
The TCP ports of the mons are open and reachable!
ceph health detail
HEALTH_WARN 1/3 mons down, quorum
Hi,
I would use an extra network/VLAN, mostly for security reasons; also
take a look at CTDB for Samba failover.
Have a nice weekend,
Ansgar
On Fri, 11 Jun 2021 at 08:21, Götz Reinicke wrote:
>
> Hi all
>
> We are getting a new Samba SMB fileserver that mounts our CephFS for exporting some
> sha
Hi,
First of all, check the workload you expect to have on the filesystem; if
you plan to migrate an old one, do some proper performance testing of
the old storage.
The IO500 can give some ideas (https://www.vi4io.org/io500/start), but it
depends on the use case of the filesystem.
cheers,
Ansgar
On Fri.
Hi folks,
I'm fine with dropping Filestore in the R release!
Only one thing to add: please add a warning to all versions we can
upgrade to the R release from, so not only Quincy but also Pacific!
Thanks,
Ansgar
Neha Ojha wrote on Tue, 1 Jun 2021, 21:24:
> Hello everyone,
>
> Given that
On Wed, 10 Mar 2021 at 12:44, Ansgar Jazdzewski wrote:
>
> Hi,
>
> Both commands did not come back with any output after 30 minutes.
>
> I found that people have run:
> radosgw-admin reshard cancel --tenant="..." --bucket="..."
> --uid="..." -
e
Thanks,
Ansgar
On Wed, 10 Mar 2021 at 10:55, Konstantin Shalygin wrote:
>
> Try to look at:
> radosgw-admin reshard stale-instances list
>
> Then:
> radosgw-admin reshard stale-instances rm
>
>
>
> k
>
> On 10 Mar 2021, at 12:11, Ansgar Jazdzewski
Hi Folks,
We are running Ceph 14.2.16, and I'd like to reshard a bucket because I
have a large omap objects warning!
so I did:
radosgw-admin bucket reshard --tenant="..." --bucket="..." --uid="..."
--num-shards=512
but I received an error:
ERROR: the bucket is currently undergoing resharding and canno
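A hedged sketch of how to inspect a stuck reshard before cancelling it
(tenant/bucket/uid placeholders as in the commands above):

```bash
# See which reshard operations are queued or in progress
radosgw-admin reshard list
# Check the status of this particular bucket
radosgw-admin reshard status --tenant="..." --bucket="..." --uid="..."
# If it is genuinely stuck, cancel it and re-run the reshard
radosgw-admin reshard cancel --tenant="..." --bucket="..." --uid="..."
```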
Hi,
You can make use of upmap so you do not need to rebalance the entire
CRUSH map every time you change the weight.
https://docs.ceph.com/en/latest/rados/operations/upmap/
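A minimal sketch of turning on the upmap balancer (assumes all clients are at
least Luminous; these commands are not part of the original reply):

```bash
ceph osd set-require-min-compat-client luminous
ceph balancer mode upmap
ceph balancer on
ceph balancer status
```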
Hope it helps,
Ansgar
Kristof Coucke wrote on Wed, 21 Oct 2020, 13:29:
> Hi,
>
> I have a cluster with 182 OS
ew group/flow/pipe for each tenant?
Thanks,
Ansgar
On Fri, 14 Aug 2020 at 16:59, Ansgar Jazdzewski wrote:
>
> Hi,
>
> > As I can understand, we are talking about Ceph 15.2.x Octopus, right?
>
> Yes, I'm on Ceph 15.2.4
>
> > What is the number of zones/realm
Hi,
> As I can understand, we are talking about Ceph 15.2.x Octopus, right?
Yes, I'm on Ceph 15.2.4
> What is the number of zones/realms/zonegroups?
ATM I run just a small test on my local machine: one zonegroup (global)
with the zones node01 and node02, and just one realm
> Is Ceph healthy? (ceph
Hi Folks,
I'm trying to move from our own custom bucket synchronization to the
RADOS Gateway built-in one.
The multisite setup is working: https://docs.ceph.com/docs/master/radosgw/multisite/
All buckets and users are visible in both clusters.
Next I tried to set up the multisite sync:
https://docs.cep
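Since the preview ends here: a hedged sketch of a minimal bucket-sync policy
(group/flow/pipe) along the lines of the multisite sync-policy docs, with the
zone names from this test setup as placeholders; not the poster's actual
configuration:

```bash
radosgw-admin sync group create --group-id=group1 --status=allowed
radosgw-admin sync group flow create --group-id=group1 --flow-id=flow-mirror \
    --flow-type=symmetrical --zones=node01,node02
radosgw-admin sync group pipe create --group-id=group1 --pipe-id=pipe1 \
    --source-zones='*' --source-bucket='*' --dest-zones='*' --dest-bucket='*'
radosgw-admin sync group modify --group-id=group1 --status=enabled
radosgw-admin period update --commit
```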