bucket to the new
pool and do a radosgw-admin bucket rewrite. Or are there other ways?
Would that work? Does someone have experience with it?
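My rough plan looks like this (untested sketch; the bucket's placement would have to point at the new pool first, e.g. via a new placement target):
radosgw-admin metadata get bucket.instance:<bucket>:<instance-id>   # check the current placement_rule
radosgw-admin bucket rewrite --bucket=<bucket>   # rewrite the objects so they end up in the new data pool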
Cheers
Boris
to be backported into Tentacle. Maybe Adam or Redo can
>> confirm?
>>
>> Thanks,
>>
>> Regards.
>>
>> On Mon, Jul 7, 2025 at 2:10 PM Boris wrote:
>>
>>> Hi Daniel,
>>>
>>> do you know when this will be released?
>>>
Hi Daniel,
do you know when this will be released?
I cannot find the change in the changelogs for reef or squid, and I
updated the certificate/key with
ceph config-key set rgw/cert/rgw.s3-poc-boris -i /etc/letsencrypt/data/s3-poc-boris_ecc/s3-poc-boris.pem
(the pem contains cert and key
Hi,
is there a way to reload the certificate in rgw without downtime? Or, if I
have multiple rgw daemons, should I do it one by one and wait for the last one to
be active again?
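What I would try as a workaround (assuming the rgw daemons are managed by cephadm; the daemon name below is made up):
ceph config-key set rgw/cert/rgw.<service> -i cert-and-key.pem
ceph orch ps --daemon-type rgw   # list the rgw daemons
ceph orch daemon restart rgw.<service>.<host>.<suffix>   # restart them one by one, waiting in between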
--
The "UTF-8-Probleme" self-help group will exceptionally meet in the large hall this time.
http-request set-header x-amz-server-side-encryption AES256 if !existing-x-amz-server-side-encryption METH_PUT or !existing-x-amz-server-side-encryption METH_POST
http-request add-header X-Forwarded-Proto https if { ssl_fc }
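The ACLs referenced above are assumed to be defined elsewhere in the frontend, roughly like this:
acl existing-x-amz-server-side-encryption req.hdr(x-amz-server-side-encryption) -m found
acl METH_PUT  method PUT
acl METH_POST method POST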
On Fri, Jun 6, 2025 at 12:51, Boris wrote:
> So, I
Hi Albert,
you can use Puppet to apply ceph config settings, for example:
# Disable RadosGW access logs
exec { 'set-debug_rgw_access':
  command => 'ceph config set client.rgw debug_rgw_access 0',
  unless  => 'ceph config get client.rgw debug_rgw_access | grep -w 0',
  path    => ['/usr/bin', '/bin'],
}
not to
policies, multipart listings and so on.
Cheers
Boris
On Thu, Jun 5, 2025 at 13:18, Boris wrote:
> This is a follow up question to the sse-kms thread, because the KMS team
> is now working on the transit engine and we will POC with openbao.
>
> Is there a way to enforce the
enforce that, I could add a header check in the haproxy
to make it transparent for the user.
cheers
Boris
--
The "UTF-8-Probleme" self-help group will exceptionally meet in the large hall this time.
-kms and sadly not sse-s3.
This is why I need to force the end user to use the KMS stuff.
Cheers
Boris
On Wed, Jun 4, 2025 at 17:45, Michael Worsham <
mwors...@datadimensions.com>:
> We use Hashicorp Vault with the SSE-S3 and the transit engine with Vault
> handling the
So this is how we are doing it and it is working very well:
- Put the index and metadata pools on fast NVMe drives. Our flash disks are
only at 1% used space.
- Only have the data pool on spinners.
- For every 5 HDDs we have 1 NVMe that serves the OSD metadata.
- After the reef upgrade we enab
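For reference, on a plain (non-cephadm) node this 5-HDDs-per-NVMe layout can be created with something like the following (device paths are only placeholders):
ceph-volume lvm batch /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf --db-devices /dev/nvme0n1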
to use it and how it will work.
But how do I prevent the user from uploading unencrypted objects?
Do I check for a header in the proxy and return a "uhuhuh, you didn't
say the magic word!" when the specific header is missing? And if this
is the way, is there a schema I need to stick to?
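A sketch of what I have in mind for haproxy (ACL names made up; untested):
acl has-sse  req.hdr(x-amz-server-side-encryption) -m found
acl METH_PUT method PUT
http-request deny deny_status 400 if METH_PUT !has-sse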
or now and call it fixed :)
Thanks anyway for your input.
- Boris
On Fri, May 16, 2025 at 17:13, Robin H. Johnson <
robb...@gentoo.org>:
> On Thu, May 15, 2025 at 02:06:49PM +0200, Boris wrote:
> > Hi,
> > I am in the process of checking orphan objects and the rados
"
}
# radosgw-admin bucket list --uid 3853f960-6d72-4760-95ea-5e8f2a571840
[]
How can I find the user's data, to verify whether it can be removed?
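What I will probably check next, unless there is a better way (reusing the uid from above):
radosgw-admin user info --uid=3853f960-6d72-4760-95ea-5e8f2a571840
radosgw-admin user stats --uid=3853f960-6d72-4760-95ea-5e8f2a571840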
Cheers
Boris
--
The "UTF-8-Probleme" self-help group will exceptionally meet in the large hall this time.
Hi,
I am in the process of checking orphan objects, and the radoslist output covers
over 300GB while the rados ls output was only 50GB.
After some investigation I saw that one bucket stood out with a lot of
objects in the radoslist part.
I did some sorting on the output file and just used th
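Roughly what I did to compare the two listings (from memory; the pool name is just an example):
rados -p default.rgw.buckets.data ls | sort > rados_ls.txt
radosgw-admin bucket radoslist --bucket=<bucket> | sort > radoslist.txt
comm -13 rados_ls.txt radoslist.txt > only_in_radoslist.txt   # entries radoslist reports but rados ls does not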
, which results in a week until it is
finished).
On Tue, May 13, 2025 at 15:15, Enrico Bocchi <
enrico.boc...@cern.ch>:
> Hi Boris,
>
> We have experienced PGs going laggy in the past with a lower level rados
> command to list omapkeys from the BI objects in the metadata
but it feels like the
bucket is somehow broken, because deleting is not working as I expect.
I will try to reshard the bucket tonight and hope it will work out. The
explanation from Casey sounds promising.
As I have a lot more buckets with a lot more objects (according to the
bucket index) this nee
and the master of a multisite which only
replicates the metadata (1 realm, multiple zonegroups, one zone per zonegroup).
Any ideas what I can do?
I am afraid to reshard the bucket, because I am not sure whether I can stop the
resharding if the PGs become laggy.
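What I would check before and during the reshard (hedged; on older releases cancelling an in-progress reshard may not work):
radosgw-admin reshard status --bucket=<bucket>
radosgw-admin reshard cancel --bucket=<bucket>
radosgw-admin bucket reshard --bucket=<bucket> --num-shards=<n>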
Cheers
Bo
xxx
secret_key = yyy
website_endpoint = https://%(bucket)s.s3-website-zg-b.test.dev
Is this a bug or expected behavior?
Cheers
Boris
during the long-running snap trim the latency of the cluster slowly degrades.
On Mon, May 5, 2025 at 13:50, Boris wrote:
> So I went through some other clusters we have and it always shows
> basically the same pattern.
>
> One PG with disproportional large OMAP data. These a
PORTED
2.13f 425 27 active+clean 46780'456499 46828:588064
2.1e2 427 27 active+clean 46588'351645 46828:492723
2.1c 11718410 active+clean 46828'1366432 46828:4124268
On Mon, May 5, 2025 at 10:00, Boris wrote:
> The m
'625068112 8935596:699142847
3.c1c 45024581632 263621 9078 active+clean+scrubbing+deep
8935596'371406432 8935596:580119804
On Mon, May 5, 2025 at 09:27, Boris wrote:
> Hi,
>
> we have one cluster where one PG that seems to be snaptrimming forever.
> It so
And the constant factor is always PG 3.c1c.
Anyone got an idea what I can do? Is it possible to "sync out and recreate"
a PG, like I can do with an OSD?
Here is some output from the pg list
Cheers
Boris
# date; ceph pg ls| grep snaptrim | awk '{print $1, $11, $12, $15}' |
c
Hi,
is it possible to have the crush rules (the ones you get from ceph osd
crush rule dump), and which pool uses which crush rule, exposed in the prometheus
exporter?
It would help us to monitor this.
Cheers
Boris
--
The "UTF-8-Probleme" self-help group will exceptionally meet in the large hall this time.
the master, but it is still listed in all other
sites.
I checked the current period on all sites and they are all the same, but
the deleted zonegroup is still in the radosgw-admin period get on all
zonegroups that are not the master.
How do I proceed?
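What I am considering to try next (a guess on my side; period pull may need --url/--access-key/--secret pointing at the master):
radosgw-admin period update --commit   # on the master zone
radosgw-admin period pull              # on each non-master site
radosgw-admin period get               # verify the zonegroup is gone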
Best wishes
Boris
--
The "UTF-8-Probleme" self-help group will exceptionally meet in the large hall this time.
also leads to a better distribution.
Cheers
Boris
> On 27.12.2024 at 05:09, Gerard Hand wrote:
>
> Just to add a bit of additional information:
> - I have tried failing the active MGR but that appeared to make no difference.
> - I found restarting the primary OSD on a PG th
Hi Istvan,
You can list your rgw daemons with the following command
> ceph service dump -f json-pretty | jq '.services.rgw.daemons'
The following command extracts all their ids
> ceph service dump -f json-pretty | jq '.services.rgw.daemons' | egrep -e
> 'gid&
Didn't you already get the answer from the reddit thread?
https://www.reddit.com/r/ceph/comments/1f88u6m/prefered_distro_for_ceph/
I always point here:
https://docs.ceph.com/en/latest/start/os-recommendations/ and we are
running very well with Ubuntu with, and without, the orchestrator.
On Thu, 5
Added a bug for it. https://tracker.ceph.com/issues/67889
On Wed, Sep 4, 2024 at 11:31, Boris wrote:
> I think I've found what it is.
>
> If you just call the rgw, without any bucket name or authentication you
> will end up with these logs.
> Now the question is: is
I think I've found what it is.
If you just call the rgw, without any bucket name or authentication, you will
end up with these logs.
Now the question is: is this a bug, given that you cannot read it? And how
can I disable it?
Cheers
Boris
On Tue, Sep 3, 2024 at 14:12, Boris wrote:
>
I am not sure how I should describe this.
We enabled the ops log and have these entries here together with the normal ops
logs.
# radosgw-admin log list | grep ^.*--
"2024-09-03-12--",
I did a head on the file and got this:
B��f�� anonymous
list_bucketsHEAD / HTTP/1.0200
;tx0
I just don't want to waste any performance and in my opinion SED can do it
transparently, because they already do it all the time, but the encryption
key is by default not secured with a password. At least that is how I
understood it. Doing dmcrypt still goes through the CPU IIRC.
But basically I
Hi,
is it possible to somehow distinguish self-encrypting drives from drives
that are lacking the support in the orchestrator?
So I don't encrypt them twice :)
--
The "UTF-8-Probleme" self-help group will exceptionally meet in the large hall this time.
I tried it with the offline compaction, and it didn't help a bit.
It took ages per OSD and starting the OSD afterwards wasn't fast either.
> On 23.08.2024 at 18:16, Özkan Göksu wrote:
>
> I have 12+12 = 24 servers with 8 x 4TB SAS SSD on each node.
> I will use the weekend and I will st
very well for us.
On Fri, Aug 23, 2024 at 18:08, Özkan Göksu wrote:
> Hello Boris. Thank you so much for the recommendation.
>
> Actually my intention is jumping next releases until I reach Quincy.
> What do you think about this? Am I gonna face with any problem while
>
Good to know. Everything is bluestore and usually 5 spinners share an SSD
for block.db.
Memory should not be a problem. We plan with 4GB / OSD with a minimum of
256GB memory.
The primary affinity is a nice idea. I only thought about it in our s3
cluster, because the index is on SAS AND SATA SSDs a
:)
Cheers
Boris
>
>
> Hi,
>
> how big are those PGs? If they're huge and are deep-scrubbed, for
> example, that can cause significant delays. I usually look at 'ceph
I have to admit: all the Spongebob references are kinda cool :D
On Thu, Aug 15, 2024 at 15:05, Patrick Donnelly <
pdonn...@redhat.com>:
> On Thu, Aug 15, 2024 at 8:29 AM Alfredo Rezinovsky
> wrote:
> >
> > I think is a very bad idea to name a release with the name of the most
> > pop
PGs are roughly 35GB.
On Wed, Aug 14, 2024 at 09:25, Eugen Block wrote:
> Hi,
>
> how big are those PGs? If they're huge and are deep-scrubbed, for
> example, that can cause significant delays. I usually look at 'ceph pg
> ls-by-pool {pool}' and the "
r us compacting
> blocking the operation on it.
>
> Istvan
> ------
> *From:* Boris
> *Sent:* Saturday, August 10, 2024 5:30:54 PM
> *To:* ceph-users@ceph.io
> *Subject:* [ceph-users] Identify laggy PGs
>
Hi,
currently we encounter laggy PGs and I would like to find out what is
causing it.
I suspect it might be one or more failing OSDs. We had flapping OSDs and I
synced one out, which helped with the flapping, but it doesn't help with
the laggy ones.
Any tooling to identify or count PG performance
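Things I am planning to look at, unless there is something better (osd.<id> is a placeholder):
ceph osd perf                            # per-OSD commit/apply latency, to spot slow disks
ceph pg dump pgs | grep laggy            # which PGs are currently flagged laggy
ceph daemon osd.<id> dump_historic_ops   # on the OSD's host: slowest recent ops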
Hi Fabien,
in addition to what Anthony said, you could do the following:
- `ceph osd set nobackfill` to disable initial backfilling
- `ceph config set osd osd_mclock_override_recovery_settings true` to
override the mclock scheduler backfill settings
- Let the orchestrator add one host each time. I w
.
So here are my questions:
- How do I back up the lockbox secrets?
- Do I need to back up the whole mon data, and if so, how can I do it?
Cheers
Boris
Ah nice.
Thanks a lot :)
On Wed, Jun 26, 2024 at 11:56, Robert Sander <
r.san...@heinlein-support.de>:
> Hi,
>
> On 6/26/24 11:49, Boris wrote:
>
> > Is there a way to only update 1 daemon at a time?
>
> You can use the feature "staggered upgrade"
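For the archives: a staggered upgrade can apparently be limited to one daemon at a time roughly like this (untested here; see the cephadm docs):
ceph orch upgrade start --image <image> --daemon-types rgw --limit 1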
ay to only update 1 daemon at a time?
Cheers
Boris
--
The "UTF-8-Probleme" self-help group will exceptionally meet in the large hall this time.
.
On Fri, May 24, 2024 at 14:19, Boris wrote:
> Hi,
>
> we are currently in the process of adopting the main s3 cluster to
> orchestrator.
> We have two realms (one for us and one for the customer).
>
> The old config worked fine and depending on the port I requeste
Hi,
we are currently in the process of adopting the main s3 cluster to the
orchestrator.
We have two realms (one for us and one for the customer).
The old config worked fine, and depending on the port I requested, I got a
different x-amz-request-id header back:
x-amz-request-id: tx0307170ac0d734ab4-
Hi Tobias,
what we usually do when we want to remove an OSD is to reweight it to 0 in the
crush map. This way there is no rebalancing after removing the OSD from
the crush map. Setting an OSD to out keeps it weighted in the crush
map, and when it gets removed, the cluster will rebalance the PGs to
reflect th
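In command form that is roughly (the osd id is a placeholder):
ceph osd crush reweight osd.<id> 0           # drain: PGs move off while the OSD stays up
# wait until all PGs are active+clean, then stop the daemon and:
ceph osd purge <id> --yes-i-really-mean-it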
Good morning Eugen,
I just found this thread and saw that I had a test image for rgw in the
config.
After removing the global and the rgw config value everything was instantly
fine.
Cheers and a happy week
Boris
On Tue, Jan 16, 2024 at 10:20, Eugen Block wrote:
> Hi,
>
> t
/troubleshooting/#ssh-errors)
The logs always show a message like "took the task" but then nothing
happens.
Cheers
Boris
did when we were hit with this issue:
Stop snaptrim, update to pacific, do an offline rocksdb compaction before the
OSDs start after the upgrade, start the OSDs and hate our lives while they
started, wait a week, slowly start the snaptrim and hope for the best. :-)
Kind regards
-
Hi Peter,
try to set the cluster to nosnaptrim
If this helps, you might need to upgrade to pacific, because you are hit by the
pg dups bug.
See: https://www.clyso.com/blog/how-to-identify-osds-affected-by-pg-dup-bug/
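i.e. (and unset it again once things are stable):
ceph osd set nosnaptrim
ceph osd unset nosnaptrim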
Kind regards
- Boris Behrens
> On 06.12.2023 at 19:01
n a set of RGW instances.
So long story short:
What are your easy setups to serve public RGW traffic with some sort of HA
and LB (without using a big HW LB that is capable of 100GBit traffic)?
And have you experienced problems when you do not shift around IP addresses?
Che
You can mute it with
"ceph health mute ALERT"
where ALERT is the all-caps keyword from "ceph health detail"
But I would update asap.
Cheers
Boris
> On 08.11.2023 at 02:02, zxcs wrote:
>
> Hi, Experts,
>
> we have a ceph cluster report HEALTH_ERR due to mu
orchestrator environment)
Good luck.
On Thu, Nov 2, 2023 at 12:48, Mohamed LAMDAOUAR <
mohamed.lamdao...@enyx.fr>:
> Hello Boris,
>
> I have one server monitor up and two other servers of the cluster are also
> up (These two servers are not monitors ) .
> I have four ot
Hi Mohamed,
are all mons down, or do you still have at least one that is running?
AFAIK: the mons save their DB on the normal OS disks, and not within the
ceph cluster.
So if all mons are dead, which means the disks that contained the mon data
are unrecoverably dead, you might need to bootstrap a
Hi Dan,
we are currently moving all the logging into lua scripts, so it is not an
issue anymore for us.
Thanks
ps: the ceph analyzer is really cool. plusplus
On Sat, Oct 28, 2023 at 22:03, Dan van der Ster <
dan.vanders...@clyso.com>:
> Hi Boris,
>
> I found that
Hi,
does someone have a solution ready to monitor traffic by IP address?
Cheers
Boris
bug
> reports to improve it.
>
> Quoting Boris Behrens:
>
> > Hi,
> > I've just upgraded our object storages to the latest pacific version
> > (16.2.14) and the autoscaler is acting weird.
> > On one cluster it just shows nothing:
> > ~# ceph osd poo
Also found what the 2nd problem was:
When there are pools using the default replicated_ruleset while there are
multiple rulesets with different device classes, the autoscaler does not
produce any output.
Should I open a bug for that?
On Wed, Oct 4, 2023 at 14:36, Boris Behrens
Found the bug for the TOO_MANY_PGS: https://tracker.ceph.com/issues/62986
But I am still not sure, why I don't have any output on that one cluster.
On Wed, Oct 4, 2023 at 14:08, Boris Behrens wrote:
> Hi,
> I've just upgraded our object storages to the latest pacific ve
Hi,
I've just upgraded our object storages to the latest pacific version
(16.2.14) and the autoscaler is acting weird.
On one cluster it just shows nothing:
~# ceph osd pool autoscale-status
~#
On the other clusters it shows this when it is set to warn:
~# ceph health detail
...
[WRN] POOL_TOO_M
mand keys, that get removed
when the RGW shuts down.
I don't want to use the orchestrator for this, because I would need to add
all the compute nodes to it and there might be other processes in place
that add FW rules in our provisioning.
Cheers
Boris
I have a use case where I want to only use a small portion of the disk for
the OSD and the documentation states that I can use
data_allocation_fraction [1]
But cephadm cannot use this and throws this error:
/usr/bin/podman: stderr ceph-volume lvm batch: error: unrecognized
arguments: --data-alloc
-de57f40ba853.target;
enabled; vendor preset: enabled)
Active: inactive (dead)
On Sat, Sep 16, 2023 at 13:29, Boris wrote:
> The other hosts are still online and the cluster only lost 1/3 of its
> services.
>
>
>
> > On 16.09.2023 at 12:53, Eugen Block wrote:
r
> daemons are down. The orchestrator is a mgr module, so that’s a bit weird,
> isn’t it?
>
> Quoting Boris Behrens:
>
>> Hi Eugen,
>> the test-test cluster where we started with simple ceph and the adoption
>> went straightforward are working fine.
>&g
pods. This process also should have been logged
> (stdout, probably in the cephadm.log as well), resulting in "enabled"
> systemd units. Can you paste the output of 'systemctl status
> ceph-@mon.'? If you have it, please also share the logs
> from the adoption proc
test cluster and there the pods start very fast. On the
legacy test cluster, which got adopted to cephadm, it does not.
Cheers
Boris
none
*
global advanced auth_service_required
none
On Fri, Sep 15, 2023 at 13:01, Boris Behrens wrote:
> Oh, we found the issue. A very old update was stuck in the pipeline. We
> canceled it and then the correct images got
700 -1 mon.0cc47a6df330@-1(probing) e0 handle_auth_bad_method hmm, they didn't like 2 result (95) Operation not supported
I added the mon via:
ceph orch daemon add mon FQDN:[IPv6_address]
On Fri, Sep 15, 2023 at 09:21, Boris Behrens wrote:
> Hi Stefan,
>
> the cluster is ru
start fresh
after reinstalling the hosts, but as I have to adopt 17 clusters to the
orchestrator, I'd rather get some learnings from the setup that is not working :)
On Fri, Sep 15, 2023 at 08:26, Stefan Kooman wrote:
> On 14-09-2023 17:49, Boris Behrens wrote:
> > Hi,
> > I c
Hi,
I am currently trying to adopt our stage cluster; some hosts just pull strange
images.
root@0cc47a6df330:/var/lib/containers/storage/overlay-images# podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
a532c37ebe42
eachability of both: old and new network, until end of migration
>
> k
> Sent from my iPhone
>
> > On 22 Aug 2023, at 10:43, Boris Behrens wrote:
> >
> > The OSDs are still only bound to one IP address.
>
>
--
The "UTF-8-Probleme" self-help group will exceptionally meet in the large hall this time.
> I'm not aware of a way to have them bind to multiple public IPs like
> the MONs can. You'll probably need to route the compute node traffic
> towards the new network. Please correct me if I misunderstood your
> response.
>
> Quoting Boris Behrens:
>
> > T
nse to have both old and new network in there, but I'd try on one
> host first and see if it works.
>
> Quoting Boris Behrens:
>
> > We're working on the migration to cephadm, but it requires some
> > prerequisites that still need planning.
> >
> >
maintained via cephadm /
> > orchestrator.
>
> I just assumed that with Quincy it already would be managed by
> cephadm. So what does the ceph.conf currently look like on an OSD host
> (mask sensitive data)?
>
> Quoting Boris Behrens:
>
> > Hey Eugen,
> >
> Regards,
> Eugen
>
> [1] https://www.spinics.net/lists/ceph-users/msg75162.html
> [2]
>
> https://docs.ceph.com/en/quincy/cephadm/services/mon/#moving-monitors-to-a-different-network
>
> Quoting Boris Behrens:
>
> > Hi,
> > I need to migrate a storage cluster to a
Hi,
I need to migrate a storage cluster to a new network.
I added the new network to the ceph config via:
ceph config set global public_network "old_network/64, new_network/64"
I've added a set of new mon daemons with IP addresses in the new network
and they are added to the quorum and seem to wor
s are
okayish but ugly :D ).
And because of the bug, we went another route with the last cluster.
I reinstalled all hosts with Ubuntu 18.04, then updated straight to pacific,
and then upgraded to Ubuntu 20.04.
Hope that helped.
Cheers
Boris
On Tue, Aug 1, 2023 at 20:06, Götz Rei
Are there any ideas how to work with this?
We disabled the logging so we do not run out of disk space, but the rgw
daemon still requires A LOT of cpu because of this.
On Wed, Jun 21, 2023 at 10:45, Boris Behrens wrote:
> I've updated the dc3 site from octopus to pacific and the pr
Hi Mahnoosh,
that helped. Thanks a lot!
On Mon, Jul 3, 2023 at 13:46, mahnoosh shahidi <
mahnooosh@gmail.com>:
> Hi Boris,
>
> You can list your rgw daemons with the following command
>
> ceph service dump -f json-pretty | jq '.services.rgw.daemons
but we are not at
the stage that we are going to implement the orchestrator yet.
Cheers
Boris
So basically it does not matter unless I want to have that split up.
Thanks for all the answers.
I am still lobbying to phase out SATA SSDs and replace them with NVME
disks. :)
On Wed, Jun 28, 2023 at 18:14, Anthony D'Atri <
a...@dreamsnake.net>:
> Even when you factor in density, io
es": "nvme0n1",
"distro": "ubuntu",
"distro_description": "Ubuntu 20.04.6 LTS",
"distro_version": "20.04",
...
"journal_rotational": "0",
"kernel_description": "#169-Ubuntu SMP Tue Jun 6 22:23:09 UTC 2023",
"kernel_version": "5.4.0-152-generic",
"mem_swap_kb": "0",
"mem_total_kb": "196668116",
"network_numa_unknown_ifaces": "back_iface,front_iface",
"objectstore_numa_node": "0",
"objectstore_numa_nodes": "0",
"os": "Linux",
"osd_data": "/var/lib/ceph/osd/ceph-0",
"osd_objectstore": "bluestore",
"osdspec_affinity": "",
"rotational": "0"
}
Cheers
Boris
I've updated the dc3 site from octopus to pacific and the problem is still
there.
I find it very weird that it only happens from one single zonegroup to the
master and not from the other two.
On Wed, Jun 21, 2023 at 01:59, Boris Behrens wrote:
> I recreated the site and the probl
c-07e6-463a-861b-78f0adeba8ad.2297866866.29
On Tue, Jun 20, 2023 at 19:29, Boris wrote:
> Hi Casey,
> already did restart all RGW instances. Only helped for 2 minutes. We now
> stopped the new site.
>
> I will remove and recreate it later.
> As two other sites don't have
Hi Casey,
I already restarted all RGW instances. It only helped for 2 minutes. We have now
stopped the new site.
I will remove and recreate it later.
As two other sites don't have the problem, I currently think I made a mistake in
the process.
Kind regards
- Boris Behrens
Hi,
yesterday I added a new zonegroup, and it seems to cycle over
the same requests over and over again.
In the log of the main zone I see these requests:
2023-06-20T09:48:37.979+ 7f8941fb3700 1 beast: 0x7f8a602f3700:
fd00:2380:0:24::136 - - [2023-06-20T09:48:37.979941+] "GET
E:OLD_BUCKET_ID <
bucket.instance:BUCKET_NAME:NEW_BUCKET_ID.json
On Thu, Apr 27, 2023 at 13:32, Boris Behrens wrote:
> To clarify a bit:
> The bucket data is not in the main zonegroup.
> I wanted to start the reshard in the zonegroup where the bucket and the
> data is located, but rgw told me to
les
On Thu, Apr 27, 2023 at 13:08, Boris Behrens wrote:
> Hi,
> I just resharded a bucket on an octopus multisite environment from 11 to
> 101.
>
> I did it on the master zone and it went through very fast.
> But now the index is empty.
>
> The files are still there
Hi,
I just resharded a bucket on an octopus multisite environment from 11 to
101.
I did it on the master zone and it went through very fast.
But now the index is empty.
The files are still there when doing a radosgw-admin bucket radoslist
--bucket-id
Do I just need to wait or do I need to recover
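A very hedged idea (I have not verified this on multisite): check whether the index can be rebuilt from the existing objects:
radosgw-admin bucket check --bucket=<bucket>
radosgw-admin bucket check --bucket=<bucket> --check-objects --fix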
Cheers Dan,
would it be an option to enable the ops log? I still didn't figure out how
it is actually working.
But I am also thinking about moving to log parsing in HAProxy and disabling the
access log on the RGW instances.
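From what I can tell so far, the relevant knobs are roughly these (not fully verified on my side):
ceph config set client.rgw rgw_enable_ops_log true
ceph config set client.rgw rgw_ops_log_rados true                                # write the ops log into the log pool
ceph config set client.rgw rgw_ops_log_socket_path /var/run/ceph/rgw-ops.sock   # or send it to a unix socket instead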
On Wed, Apr 26, 2023 at 18:21, Dan van der Ster <
dan.vanders...@
Thanks Janne, I will hand that to the customer.
> Look at https://community.veeam.com/blogs-and-podcasts-57/sobr-veeam
> -capacity-tier-calculations-and-considerations-in-v11-2548
> for "extra large blocks" to make them 8M at least.
> We had one Veeam installation vomit millions of files onto our
ne from the s3cmd/aws
cli standpoint.
Has anyone here ever experienced veeam problems with rgw?
Cheers
Boris
I don't think you can exclude that.
We've built a notification in the customer panel that there are incomplete
multipart uploads which will be added as space to the bill. We also added a
button to create a LC policy for these objects.
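For reference, the rule such a button creates is essentially the standard S3 abort-incomplete-multipart rule; via the aws cli it would look roughly like this (endpoint and bucket are placeholders):
aws --endpoint-url https://s3.example.com s3api put-bucket-lifecycle-configuration --bucket <bucket> --lifecycle-configuration '{"Rules":[{"ID":"abort-incomplete-mpu","Status":"Enabled","Filter":{},"AbortIncompleteMultipartUpload":{"DaysAfterInitiation":7}}]}'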
On Tue, Apr 11, 2023 at 19:07, wrote:
> The radosgw-adm
x_buckets": 1000, and those users have the same access_denied issue
> when creating a bucket.
>
> We also tried other bucket names and it is the same issue.
>
> On Thu, Mar 30, 2023 at 6:28 PM Boris Behrens wrote:
>
>> Hi Kamil,
>> is this with all new buckets o
Hi,
you might suffer from the same bug we suffered:
https://tracker.ceph.com/issues/53729
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/KG35GRTN4ZIDWPLJZ5OQOKERUIQT5WQ6/#K45MJ63J37IN2HNAQXVOOT3J6NTXIHCA
Basically there is a bug that prevents the removal of PGlog items. You need
Hi Nicola, can you send the output of
ceph osd df tree
ceph df
?
Cheers
Boris
On Thu, Mar 30, 2023 at 16:36, Nicola Mori wrote:
> Dear Ceph users,
>
> my cluster is made up of 10 old machines, with uneven number of disks and
> disk size. Essentially I have just one big da
Hi Kamil,
is this with all new buckets or only the 'test' bucket? Maybe the name is
already taken?
Can you check with s3cmd --debug whether you are connecting to the correct endpoint?
Also I see that the user seems to not be allowed to create buckets:
...
"max_buckets": 0,
...
Cheer