Hi Henning,
On Wed, May 17, 2023 at 9:25 PM Henning Achterrath wrote:
>
> Hi all,
>
> we did a major update from Pacific to Quincy (17.2.5) a month ago
> without any problems.
>
> Now we have tried a minor update from 17.2.5 to 17.2.6 (ceph orch
> upgrade). It gets stuck in the mds upgrade phase. At this
Hi
I'm currently using Ceph version 16.2.7 and facing an issue with bucket
creation in a multi-zone configuration. My setup includes two zone groups:
ZG1 (Master) and ZG2, with one zone in each zone group (zone-1 in ZG1 and
zone-2 in ZG2).
The objective is to create buckets in a specific zone gr
If it works I’d be amazed. We have this slow and limited delete issue also.
What we’ve done is run multiple deletes on the same bucket from multiple servers
via s3cmd.
Istvan Szabo
Staff Infrastructure Engineer
---
Agoda Services Co., Ltd.
e: istvan
Use this to get the relevant long lines in the log:
journalctl -u ceph-d0a3b6e0-d2c3-11ed-be05-a7a3a1d7a87e@grafana-fl31ca104ja0201 | less -S
It is '--user 472' according to the contents of unit.run, not the default ceph user
167. Maybe setting the directory owner to 472 could help.
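A rough sketch of what that could look like (the grafana data path is a
placeholder, not taken from your host):

  # hypothetical path; adjust fsid/hostname to match your deployment
  chown -R 472:472 /var/lib/ceph/<fsid>/grafana.<hostname>/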
Hope it helps
Ben
Adiga, Anantha wrote:
On Wed, 2023-05-17 at 17:23, Marc wrote:
> >
> >
> > In fact, when we start up the cluster, we don't have DNS available to
> > resolve the IP addresses, and for a short while, all OSDs are located
> > in a new host called "localhost.localdomain". At that point, I fixed
> > it by setting the static hostname using `hostnamectl set-hostname xxx`.
I have two autofs entries that mount the same cephfs file system to two
different mountpoints. Accessing the first of the two fails with 'stale
file handle'. The second works normally. Other than the name of the
mount point, the lines in autofs are identical. No amount of 'umount
-f' or res
Hi,
I've been following this thread with interest as it seems like a unique use
case to expand my knowledge. I don't use LRC or anything outside basic
erasure coding.
What is your current crush steps rule? I know you made changes since your
first post and had some thoughts I wanted to share, but
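If it helps to compare notes, the rule can be dumped straight from the cluster
(the rule name below is a placeholder):

  ceph osd crush rule ls
  ceph osd crush rule dump <your-ec-rule-name>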
Originally we had about a hundred packages in
https://copr.fedorainfracloud.org/coprs/ceph/el9/ before they were
wiped out in rhbz#2143742. I went back over the list of outstanding
deps today. EPEL lacks only five packages now. I've built those into
the Copr today.
You can enable it with "dnf copr
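If the project path still matches that URL, the full command is presumably
something along these lines (project name assumed from the URL, since the line
above was cut off):

  dnf copr enable ceph/el9   # assumed project name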
On 5/17/23 18:07, Stefan Kooman wrote:
On 5/17/23 17:29, Conrad Hoffmann wrote:
Hi all,
I'm having difficulties removing a CephFS volume that I set up for
testing. I've been through this with RBDs, so I do know about
`mon_allow_pool_delete`. However, it doesn't help in this case.
It is a cl
On 5/16/23 05:59, Konstantin Shalygin wrote:
Hi Mark!
Thank you very much for this message, acknowledging the problem publicly is the
beginning of fixing it ❤️
Thanks Konstantin! For what it's worth, I think all of these topics
have been discussed publicly (and some quite extensively!) du
>
>
> In fact, when we start up the cluster, we don't have DNS available to
> resolve the IP addresses, and for a short while, all OSDs are located
> in a new host called "localhost.localdomain". At that point, I fixed
> it by setting the static hostname using `hostnamectl set-hostname
> xxx`.
Ben,
Thanks for the suggestion.
Changed the user and group to 167 for all files in the data and etc folders in
the grafana service folder that were not 167. Did a systemctl daemon-reload and
restarted the grafana service,
but I am still seeing the same error
-- Logs begin at Mon 2023-05-15 19:39:34
How do I create a user name and password that I could use to
log in to grafana?
Vlad
On 11/16/22 08:42, E Taka wrote:
Thank you, Nizam. I wasn't aware that the Dashboard login is not the same
as the grafana login. Now I have access to the logfiles.
On Wed., Nov. 16, 2022 at 15:06, N
Hi,
I would recommend adding the --image option to the bootstrap command so
it will only try to pull it from the local registry. If you also
provide the --skip-monitoring-stack option it will ignore Prometheus
etc. for the initial bootstrap. After your cluster has been deployed
you can set t
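Combined with the values from your bootstrap command, that would look roughly
like this (the --image path is an assumption about how the Ceph image is
mirrored in your registry):

  cephadm bootstrap --mon-ip 10.10.128.68 \
    --registry-url my.registry.xo \
    --registry-username myuser1 --registry-password mypassword1 \
    --image my.registry.xo/ceph/ceph:v17.2.5 \
    --skip-monitoring-stack \
    --dashboard-password-noupdate --initial-dashboard-password P@ssw0rd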
I'm afraid that feature will be new in the Reef release; multisite
resharding isn't supported on Quincy.
On Wed, May 17, 2023 at 11:56 AM Alexander Mamonov wrote:
>
> https://docs.ceph.com/en/latest/radosgw/multisite/#feature-resharding
> When I try this I get:
> root@ceph-m-02:~# radosgw-admin zo
I think that information about the object exists only in the postRequest
context (because only then have we accessed the object).
To get the size of the object in the preRequest context you need to take it
from "Request.ContentLength".
See:
https://github.com/ceph/ceph/blob/cd5bf7d94251de4667f79591d5832e64
On 5/17/23 17:29, Conrad Hoffmann wrote:
Hi all,
I'm having difficulties removing a CephFS volume that I set up for
testing. I've been through this with RBDs, so I do know about
`mon_allow_pool_delete`. However, it doesn't help in this case.
It is a cluster with 3 monitors. You can find a co
I tried to deploy a cluster from a private registry and used this command:
cephadm bootstrap --mon-ip 10.10.128.68 --registry-url my.registry.xo
--registry-username myuser1 --registry-password mypassword1
--dashboard-password-noupdate --initial-dashboard-password P@ssw0rd
I even changed the Default section
The release of Reef has been delayed in part due to issues that sidelined the
testing / validation infrastructure.
> On May 15, 2023, at 05:40, huy nguyen wrote:
>
> Hi, as I understand, Pacific+ is having a performance issue that does not
> exist in older releases? So that why Ceph's new rele
Hi all,
We are running a Nautilus cluster. Today, due to UPS work, we shut
down the whole cluster.
After we started the cluster, many OSDs went down and they seem to start
doing the heartbeat_check using the public network. For example, we
see the following logs:
---
2023-05-16 19:35:29.254 7efcd
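In case it is useful while debugging, the networks the OSDs believe they should
use can be checked with something like this (a generic sketch, not output from
this cluster):

  ceph config get osd public_network
  ceph config get osd cluster_network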
Hi all,
we did a major update from Pacific to Quincy (17.2.5) a month ago
without any problems.
Now we have tried a minor update from 17.2.5 to 17.2.6 (ceph orch
upgrade). It gets stuck in the mds upgrade phase. At this point the cluster
tries to scale down mds (ceph fs set max_mds 1). We waited a f
https://docs.ceph.com/en/latest/radosgw/multisite/#feature-resharding
When I try this I get:
root@ceph-m-02:~# radosgw-admin zone modify --rgw-zone=sel
--enable-feature=resharding
ERROR: invalid flag --enable-feature=resharding
root@ceph-m-02:~# ceph version
ceph version 17.2.5 (98318ae89f1a893a6d
Hi, as I understand it, Pacific+ has a performance issue that does not exist
in older releases? Is that why Ceph's new release (Reef) is delayed this
year?
I just want the latest minor version before upgrading to the next major version
:) This practice isn't recommended elsewhere, but I want to make sure and limit
errors as much as possible.
Thanks for your answer. I was able to get the Lua debug log, but I think some
request fields don't work.
I have this Lua script, for example:
if Request.HTTP.StorageClass == 'COLD' then
RGWDebugLog(Request.RGWOp .. " request with StorageClass: " ..
Request.HTTP.StorageClass .. " Obj name: " .
Hi all,
I'm having difficulties removing a CephFS volume that I set up for
testing. I've been through this with RBDs, so I do know about
`mon_allow_pool_delete`. However, it doesn't help in this case.
It is a cluster with 3 monitors. You can find a console log of me
verifying that `mon_allow
This is interesting, and it arrived minutes after I had replaced an HDD
OSD (with NVMe DB/WAL) in a small cluster. With the three profiles I was
only seeing objects/second of around 6-8 (high_client_ops), 9-12
(balanced), 12-15 (high_recovery_ops). There was only a very light
client load.
Wit
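For anyone following along, switching between those mClock profiles is a single
config change, roughly:

  # pick one of: high_client_ops, balanced, high_recovery_ops
  ceph config set osd osd_mclock_profile high_recovery_ops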
Hi Samual,
Not sure if you know but if you don't use the default CRUSH map, you can
also use custom location hooks. This can be used to bring your osds into
the correct place in the CRUSH map the first time they start.
https://docs.ceph.com/en/quincy/rados/operations/crush-map/#custom-location-h
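As a rough illustration (the path and CRUSH buckets below are made up, not taken
from that page), a location hook is just an executable that prints the OSD's
intended CRUSH location, plus a config option pointing at it:

  #!/bin/sh
  # /usr/local/bin/ceph-crush-location (hypothetical path)
  # The printed key=value pairs become the OSD's CRUSH location at startup.
  echo "root=default rack=rack1 host=$(hostname -s)"

  # tell the OSDs to use the hook
  ceph config set osd crush_location_hook /usr/local/bin/ceph-crush-location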
Just a reminder that today is the last day to submit to the Ceph Days Vancouver
CFP
https://ceph.io/en/community/events/2023/ceph-days-vancouver
--
Mike Perez
Community Manager
Ceph Foundation
--- Original Message ---
On Thursday, May 11th, 2023 at 9:21 AM, Mike Perez wrote:
> Hi ev
Thought I might forward this to the users list in case anyone else is
experiencing or knows how to resolve.
Thank you,
Josh Beaman
From: Beaman, Joshua
Date: Tuesday, May 16, 2023 at 3:23 PM
To: d...@ceph.io
Subject: Public Access URL returns "NoSuchBucket" when rgw_swift_account_in_url
is Tr
Hi Rok,
try this:
rgw_delete_multi_obj_max_num - Max number of objects in a single
multi-object delete request
(int, advanced)
Default: 1000
Can update at runtime: true
Services: [rgw]
config set
WHO: client. or client.rgw
KEY: rgw_delete_multi_obj_max_num
VALUE: 1
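Put together, that would be something like this (the value is a placeholder,
since the VALUE above was cut off):

  ceph config set client.rgw rgw_delete_multi_obj_max_num 10000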
Regar
Hey,
A question slightly related to this:
> I would suggest that you add all new hosts and make the OSDs start
> > with a super-low initial weight (0.0001 or so), which means they will
> > be in and up, but not receive any PGs.
Is it possible to have the correct weight set and use ceph osd set
thx.
I tried with:
ceph config set mon rgw_delete_multi_obj_max_num 1
ceph config set client rgw_delete_multi_obj_max_num 1
ceph config set global rgw_delete_multi_obj_max_num 1
but still only 1000 objects get deleted.
Is the target something different?
On Wed, May 17, 2023 at 11:58
multi delete is inherently limited to 1000 per operation by AWS S3:
https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html
This is a hard-coded limit in RGW as well, currently. You will need to
batch your deletes in groups of 1000. radosgw-admin has a
"--purge-objects" option
Hi,
keep in mind that deleting objects in RGW involves its garbage collector
and lifecycle management. Thus the real deletion impact may occur later.
If you are able to use radosgw-admin you can instruct it to skip the
garbage collector and delete objects immediately. This is useful for
rem
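If memory serves, that is the --bypass-gc flag, along the lines of (bucket name
is a placeholder):

  radosgw-admin bucket rm --bucket=<bucket-name> --purge-objects --bypass-gc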
I think this is capped at 1000 by the config setting. I've used the aws
and s3cmd clients to delete more than 1000 objects at a time and it
works even with the config setting capped at 1000, but it is a bit slow.
#> ceph config help rgw_delete_multi_obj_max_num
rgw_delete_multi_obj_max_num - Max
You could check the owner of /var/lib/ceph on the host running the grafana
container. If its owner is root, change it to 167:167 recursively.
Then do a systemctl daemon-reload and restart the service. Good luck.
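Something along these lines (the fsid and hostname in the unit name are
placeholders):

  chown -R 167:167 /var/lib/ceph
  systemctl daemon-reload
  systemctl restart ceph-<fsid>@grafana.<hostname>.service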
Ben
Adiga, Anantha wrote on Wednesday, May 17, 2023 at 03:57:
> Hi
>
> Upgraded from Pacific 16.2.5 to 17.2.6 on May 8
Hi,
I would like to delete millions of objects in an RGW instance with:
mc rm --recursive --force ceph/archive/veeam
but it seems it allows only 1000 (or 1002 exactly) removals per command.
How can I delete/remove all objects with some prefix?
Kind regards,
Rok