same cluster configuration on multiple
clusters? Or is this approach not aligned with cephadm, and should we do it
a different way?
Kamil Madac
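If the goal is to apply the same configuration to several cephadm clusters, one approach that seems aligned with cephadm is to keep the service specs in a YAML file and apply that file on every cluster; a minimal sketch (file names are placeholders, not an official cephadm recommendation):

    # on a reference cluster: export all service specs to a file
    ceph orch ls --export > cluster-specs.yaml

    # on every other cluster: apply the same spec file
    ceph orch apply -i cluster-specs.yaml

    # cluster-wide options can be carried over separately, e.g.
    ceph config assimilate-conf -i common-ceph.conf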
lish rbd-mirror?
Thank you very much for any advice.
Kamil Madac
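In case the question is how to establish rbd-mirror between two clusters, a rough sketch of the usual bootstrap flow (pool name, site names, and token path are placeholders):

    # site-a (primary): enable per-image mirroring and create a bootstrap token
    rbd mirror pool enable mypool image
    rbd mirror pool peer bootstrap create --site-name site-a mypool > /tmp/bootstrap-token

    # site-b (secondary): deploy the rbd-mirror daemon and import the token
    ceph orch apply rbd-mirror --placement=1
    rbd mirror pool enable mypool image
    rbd mirror pool peer bootstrap import --site-name site-b --direction rx-only mypool /tmp/bootstrap-token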
Ilya, thanks for the clarification.
On Thu, May 4, 2023 at 1:12 PM Ilya Dryomov wrote:
> On Thu, May 4, 2023 at 11:27 AM Kamil Madac wrote:
> >
> > Thanks for the info.
> >
> > As a solution we used rbd-nbd, which works fine without any issues. If we
> will have ti
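For reference, a minimal sketch of mapping an image via rbd-nbd instead of the kernel client (pool and image names are placeholders):

    # userspace NBD-based mapping (avoids the krbd kernel client)
    rbd device map --device-type nbd mypool/myimage
    # equivalent older form:
    rbd-nbd map mypool/myimage

    rbd device list --device-type nbd   # show current nbd mappings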
4:06 PM Ilya Dryomov wrote:
> On Wed, May 3, 2023 at 11:24 AM Kamil Madac wrote:
> >
> > Hi,
> >
> > We deployed a Pacific cluster (16.2.12) with cephadm. We experience the
> > following error during rbd map:
> >
> > [Wed May 3 08:59:11 2023] libceph: mo
f-4f78-96c8-8ec4e4f78a01
Thank you.
--
Kamil Madac
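A short sketch of how such a mapping failure can be reproduced and inspected (pool and image names are placeholders):

    rbd map mypool/myimage    # attempt the kernel (krbd) mapping
    dmesg | tail -n 30        # libceph/rbd kernel messages end up here
    rbd device list           # list currently mapped krbd devices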
debug_rgw = 20) on a non-public port
> - test against this endpoint and check logs
>
> This might give you more insight.
>
> On Fri, Mar 31, 2023 at 9:36 AM Kamil Madac <
> kamil.ma...@gmail.com> wrote:
>
>> We checked s3cmd --debug and the endpoint is OK (Working w
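A minimal sketch of the debug setup suggested above, assuming cephadm-managed RGWs (service name, host, and port are placeholders):

    # raise RGW logging; very verbose, revert afterwards
    ceph config set client.rgw debug_rgw 20

    # deploy an extra RGW instance on a non-public port for testing
    ceph orch apply rgw testrgw --port 8081 --placement="host1"

    # run the failing bucket creation against host1:8081 with s3cmd,
    # inspect that daemon's log, then revert:
    ceph config set client.rgw debug_rgw 1/5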
check s3cmd --debug if you are connecting to the correct endpoint?
>
> Also I see that the user does not seem to be allowed to create buckets
> ...
> "max_buckets": 0,
> ...
>
> Cheers
> Boris
>
> On Thu, Mar 30, 2023 at 5:43 PM Kamil Madac <
>
"default_placement": "",
"default_storage_class": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
&q
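If max_buckets = 0 in the user info above is indeed what blocks bucket creation, a sketch of checking and raising the limit (the uid is a placeholder; in a multisite setup this would be run against the master zone so the change syncs over):

    radosgw-admin user info --uid=myuser | grep max_buckets
    radosgw-admin user modify --uid=myuser --max-buckets=1000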
Hi,
One of my customers had a correctly working RGW cluster with two zones in one
zonegroup, and since a few days ago users have not been able to create buckets and
always get Access Denied. Working with existing buckets still works (e.g.
listing objects or putting objects into an existing bucket). The only opera
prod-ba-hq)
  metadata sync syncing
        full sync: 0/64 shards
        incremental sync: 64/64 shards
        metadata is caught up with master
  data sync source: 6067eec6-a930-45c7-af7d-a7ef2785a2d7 (solargis-prod-ba-dc)
        syncing
        full sync: 0/128 shards
        incremental sync: 128/128 shards
        data is caught up with source
--
Kamil
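For context, the output above looks like radosgw-admin sync status; it can be re-checked on each zone's cluster, and per bucket if needed (the bucket name is a placeholder):

    radosgw-admin sync status                             # run on each zone's cluster
    radosgw-admin bucket sync status --bucket=mybucket    # per-bucket detail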
round of tests in
our testlab.
Kamil
On Mon, Nov 14, 2022 at 3:05 PM Christian Rohmann <
christian.rohm...@inovex.de> wrote:
> Hey Kamil
>
> On 14/11/2022 13:54, Kamil Madac wrote:
> > Hello,
> >
> > I'm trying to create an RGW zonegroup with two zones, and t
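For reference, a compressed sketch of the usual two-zone multisite setup; realm, zonegroup, zone names, endpoints, and keys are placeholders, and the system user behind SYSKEY/SYSSECRET must be created on the master zone first:

    # cluster 1 (master zone)
    radosgw-admin realm create --rgw-realm=myrealm --default
    radosgw-admin zonegroup create --rgw-zonegroup=myzg --endpoints=http://rgw1:8080 --master --default
    radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=zone-a --endpoints=http://rgw1:8080 \
        --master --default --access-key=SYSKEY --secret=SYSSECRET
    radosgw-admin period update --commit

    # cluster 2 (secondary zone)
    radosgw-admin realm pull --url=http://rgw1:8080 --access-key=SYSKEY --secret=SYSSECRET
    radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=zone-b --endpoints=http://rgw2:8080 \
        --access-key=SYSKEY --secret=SYSSECRET
    radosgw-admin period update --commit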
When the node is back again, replication continues to work.
What is the purpose of allowing multiple endpoints in the zone
configuration when an outage of one of them stops replication from
working?
Thank you.
Kamil Madac
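One common way to avoid a single replication endpoint being a single point of failure is to publish one load-balanced address as the zone endpoint instead of the individual RGW hosts; a sketch with a placeholder VIP and zone names:

    radosgw-admin zone modify --rgw-zone=zone-a --endpoints=http://rgw-vip.example.com:8080
    radosgw-admin zonegroup modify --rgw-zonegroup=myzg --endpoints=http://rgw-vip.example.com:8080
    radosgw-admin period update --commit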
default value. No
health warning about high cache consumption is generated.
Is that a known behavior, and can it be solved by some reconfiguration?
Can someone give us a hint on what to check, debug, or tune?
Thank you.
Kamil Madac
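Assuming the cache in question is the MDS or OSD cache, a few starting points for checking the relevant limits and actual usage (daemon names are placeholders):

    ceph health detail
    ceph config get mds mds_cache_memory_limit
    ceph daemon mds.myfs-a cache status     # on the host running that MDS
    ceph config get osd osd_memory_target
    ceph tell osd.0 dump_mempools           # per-OSD memory breakdown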