We use rclone here exclusively now (we previously used mc).
On 2024-11-15 22:45, Orange, Gregory (Pawsey, Kensington WA) wrote:
We have a lingering fondness for Minio's mc client, and previously
recommended it to users of our RGW clusters. In certain uses, however,
performance was much poorer t
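For anyone setting this up fresh, a minimal rclone remote for a Ceph RGW
endpoint looks roughly like the sketch below; the remote name, endpoint and
keys are placeholders, not values from this thread:
[my-rgw]
type = s3
provider = Ceph
access_key_id = ACCESS_KEY
secret_access_key = SECRET_KEY
endpoint = https://rgw.example.com
$ rclone lsd my-rgw:     # list buckets to confirm the remote works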
I guess there is a missing dependency (which really should be
auto-installed), which is also not documented in the release notes as
a new requirement. This seems to fix it:
$ apt install --no-install-recommends python3-packaging
On 2024-11-26 08:03, Matthew Darwin wrote:
I have upgraded from 17.2.7 to 17.2.8 on debian 11 and the OSDs fail
to start. Advice on how to proceed would be welcome.
This is my log from ceph-volume-systemd.log
[2024-11-26 12:51:30,251][systemd][INFO ] raw systemd input received:
lvm-1-1c136e54-6f58-4f36-af10-d47d215b991b
[2024-11-26 12:5
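A hedged sketch of how one might confirm the dependency fix above and
re-activate the OSDs; the log path is the default, adjust for your setup:
$ apt install --no-install-recommends python3-packaging
$ python3 -c "import packaging"        # should exit silently if the dependency is present
$ ceph-volume lvm activate --all       # re-run what the systemd units were attempting
$ tail -n 50 /var/log/ceph/ceph-volume.log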
I'm on quincy.
I had lots of problems with RGW getting stuck. Once I dedicated a
single RGW on each side to do replication, my problems went away.
Having a cluster of RGW behind a load balancer seemed to be confusing
things.
I still have multiple RGWs for user-facing load, but a single RGW
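One way to get this dedicated-replication-RGW layout, assuming the instance
names below (frontend1, frontend2) stand in for your user-facing gateways, is
to turn the sync thread off everywhere except the dedicated daemon:
$ ceph config set client.rgw.frontend1 rgw_run_sync_thread false
$ ceph config set client.rgw.frontend2 rgw_run_sync_thread false
# leave the dedicated replication RGW at the default (true), keep it out of the
# user-facing load balancer, and restart the daemons for the change to apply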
When trying to clean up multi-part files, I get the following error:
$ rclone backend cleanup s3:bucket-name
2024/07/04 02:42:19 ERROR : S3 bucket bucket-name: failed to remove
pending multipart upload for bucket "bucket-name" key
"0a424a15dee6fecb241130e9e4e49d99ed120f05/outputs/012149-0
We have had pgs get stuck in quincy (17.2.7). After changing to wpq,
no such problems were observed. We're using a replicated (x3) pool.
On 2024-05-02 10:02, Wesley Dillingham wrote:
In our case it was with an EC pool as well. I believe the PG state was
degraded+recovering / recovery_wait and
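For reference, switching the scheduler back to wpq is a single config change;
it only takes effect once each OSD is restarted:
$ ceph config set osd osd_op_queue wpq
# then do a rolling restart of the OSDs with whatever tooling you normally use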
Hi,
I'm new to bucket policies. I'm trying to create a sub-user that has
only read-only access to all the buckets of the main user. I created
the policy below; I can't create or delete files, but I can still
create buckets using "rclone mkdir". Any idea what I'm doing wrong?
I'm using ceph
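For comparison, a minimal read-only bucket policy sketch; BUCKET_NAME and
READONLY_UID are placeholders, not the poster's actual names. Note that a
bucket policy is attached to a bucket that already exists, so it cannot stop
the sub-user from creating new buckets; that would have to be restricted on
the user or sub-user itself (untested suggestion):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "ReadOnly",
    "Effect": "Allow",
    "Principal": {"AWS": ["arn:aws:iam:::user/READONLY_UID"]},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::BUCKET_NAME", "arn:aws:s3:::BUCKET_NAME/*"]
  }]
}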
Chris,
Thanks for all the investigations you are doing here. We're on
quincy/debian11. Is there any working path at this point to
reef/debian12? Ideally I want to go in two steps: upgrade ceph first
or upgrade debian first, then upgrade to the other one. Most of
our infra is already
It would be nice if the dashboard changes, which are very big, had
been covered in the release notes, especially since they are not
really backwards compatible. (See my previous messages on this topic)
On 2023-10-30 10:50, Yuri Weinstein wrote:
We're happy to announce the 7th backport rel
We are still waiting on debian 12 support. Currently our ceph is
stuck on debian 11 due to lack of debian 12 releases.
On 2023-11-01 03:23, nessero karuzo wrote:
Hi to all of the ceph community. I have a question about Debian 12 support for ceph
17. I didn't find a repo for that release at https://down
some filtering done with cluster id or
something to properly identify it.
FYI @Pedro Gonzalez Gomez <mailto:pegon...@redhat.com> @Ankush Behl
<mailto:anb...@redhat.com> @Aashish Sharma <mailto:aasha...@redhat.com>
Regards,
Nizam
On Mon, Oct 30, 2023 at 11:05 PM Matthew Darwin w
That's why the utilization charts are empty:
they rely on
the prometheus info.
And I raised a PR to disable the new dashboard in quincy.
https://github.com/ceph/ceph/pull/54250
Regards,
Nizam
On Mon, Oct 30, 2023 at 6:09 PM Matthew Darwin wrote:
Hello,
We're not using
-prometheus-api-url.
You can switch to the old Dashboard by switching the feature toggle in the
dashboard: run `ceph dashboard feature disable dashboard` and reload the
page. Probably this should have been disabled by default.
Regards,
Nizam
On Sun, Oct 29, 2023, 23:04 Matthew Darwin wrot
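Putting the two suggestions above together as commands; the Prometheus URL is
a placeholder:
# revert to the old landing page, as described above
$ ceph dashboard feature disable dashboard
# or keep the new page and point it at your Prometheus instance
$ ceph dashboard set-prometheus-api-host http://prometheus.example.com:9090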
Hi all,
I see 17.2.7 quincy is published as debian-bullseye packages. So I
tried it on a test cluster.
I must say I was not expecting the big dashboard change in a patch
release. Also all the "cluster utilization" numbers are blank now
(any way to fix it?), so the dashboard is much less
On 2023-08-22 08:00, Matthew Darwin wrote:
Thanks Rich,
On quincy it seems that providing an end-date is an error. Any other
ideas from anyone?
$ radosgw-admin sync error trim --end-date="2023-08-20 23:00:00"
end-date not allowed.
On 2023-08-20 19:00, Richard Bade wrote:
Hi Matthew,
At
Hopefully that helps.
Rich
--
Date: Sat, 19 Aug 2023 12:48:55 -0400
From: Matthew Darwin
Subject: [ceph-users] radosgw-admin sync error trim seems to do
nothing
To: Ceph Users
Message-ID: <95e7edfd-ca29-fc0e-a30a-987f1c43e...@mdarwin.ca>
Content-Type: text/plain; charset=UTF-8; form
For the last few upgrades we upgraded ceph, then upgraded the O/S... it worked
great. I was hoping we could do the same again this time.
On 2023-08-21 12:18, Chris Palmer wrote:
Ohhh... so if I read that correctly, we can't upgrade either
debian or ceph until the dependency problem is resolved,
Hello all,
"radosgw-admin sync error list" returns errors from 2022. I want to
clear those out.
I tried "radosgw-admin sync error trim" but it seems to do nothing.
The man page seems to offer no suggestions
https://docs.ceph.com/en/quincy/man/8/radosgw-admin/
Any ideas what I need to do
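What I would try, as a sketch: list first to see what is there, then trim.
The per-shard form is my assumption based on the error log being sharded;
check radosgw-admin --help on your version:
$ radosgw-admin sync error list | head -50
$ radosgw-admin sync error trim
$ radosgw-admin sync error trim --shard-id=0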
I have basically given up relying on bucket sync to work properly in
quincy. I have been running a cron job to manually sync files between
datacentres to catch the files that don't get replicated. It's pretty
inefficient, but at least all the files get to the backup datacentre.
Would love to
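Roughly what such a cron job can look like; the remote and bucket names are
placeholders, and rclone copy is used rather than sync so nothing gets
deleted on the backup side:
# m h dom mon dow  command
0 * * * *  rclone copy primary-rgw:my-bucket backup-rgw:my-bucket --checksum --fast-list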
Hi Alex,
We also have a multi-site setup (17.2.5). I just deleted a bunch of
files from one side and some files got deleted on the other side but
not others. I waited 10 hours to see if the files would delete. I
didn't do an exhaustive test like yours, but it seems like similar issues. In
our case, l
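When this happens here, the first things I look at are the per-bucket and
overall sync markers; the bucket name is a placeholder:
$ radosgw-admin bucket sync status --bucket=my-bucket
$ radosgw-admin sync status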
____
From: Matthew Darwin
Sent: 14 October 2022 18:57:37
To: c...@elchaka.de; ceph-users@ceph.io
Subject: [ceph-users] Re: strange OSD status when rebooting one server
https://gist.githubusercontent.com/matthewdarwin/aec3c2b16ba5e74beb4af1d49e
hint...
Hth
On 14 October 2022 18:45:40 CEST, Matthew Darwin wrote:
Hi,
I am hoping someone can help explain this strange message. I took 1 physical server
offline which contains 11 OSDs. "ceph -s" reports 11 osd down. Great.
But on the next line it says "4 hosts" are impacted.
Hi,
I am hoping someone can help explain this strange message. I took 1
physical server offline which contains 11 OSDs. "ceph -s" reports 11
osd down. Great.
But on the next line it says "4 hosts" are impacted. It should only
be 1 single host? When I look the manager dashboard all the O
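To see exactly which OSDs and CRUSH hosts the health message is counting,
something like this helps:
$ ceph osd tree down     # only the down OSDs, grouped under their hosts
$ ceph health detail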
I did manage to get this working. Not sure what exactly fixed it, but
creating the pool "default.rgw.otp" helped. Why are missing pools not
automatically created?
Also this:
radosgw-admin sync status
radosgw-admin metadata sync run
On 2022-06-20 19:26, Matthew Darwin wrot
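For anyone hitting the same missing pool, creating it by hand looks like
this; the pg count of 8 is just an example, size it for your cluster:
$ ceph osd pool create default.rgw.otp 8
$ ceph osd pool application enable default.rgw.otp rgw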
Fri, Jun 24, 2022 at 9:03 AM Yaarit Hatuka
wrote:
We added a new collection in 17.2.1 to indicate Rook
deployments, since we
want to understand its volume in the wild; that is why the module asks for
re-opting-in.
On Fri, Jun 24, 2022 at 9:52 AM Matthew Darwin
wrote
telemetry!
Yaarit
On Thu, Jun 23, 2022 at 11:53 PM Matthew Darwin wrote:
Sorry. Eventually it goes away. Just slower than I was expecting.
On 2022-06-23 23:42, Matthew Darwin wrote:
>
> I just updated quincy from 17.2.0 to 17.2.1. Ceph status reports
> "Tele
Sorry. Eventually it goes away. Just slower than I was expecting.
On 2022-06-23 23:42, Matthew Darwin wrote:
I just updated quincy from 17.2.0 to 17.2.1. Ceph status reports
"Telemetry requires re-opt-in". I then run
$ ceph telemetry on
$ ceph telemetry on --license sharing-
I just updated quincy from 17.2.0 to 17.2.1. Ceph status reports
"Telemetry requires re-opt-in". I then run
$ ceph telemetry on
$ ceph telemetry on --license sharing-1-0
Still the message "TELEMETRY_CHANGED( Telemetry requires re-opt-in)
message" remains in the log.
Any ideas how to get ri
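The sequence that eventually clears the warning, as far as I can tell (the
show step is optional but lets you preview what gets sent):
$ ceph telemetry show
$ ceph telemetry on --license sharing-1-0
$ ceph health detail | grep -i telemetry    # the warning may take a while to clear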
Hi all,
Running into some trouble. I just set up ceph multi-site replication.
The good news is that it is syncing the data, but the metadata is NOT syncing.
I was trying to follow the instructions from here:
https://docs.ceph.com/en/quincy/radosgw/multisite/#create-a-secondary-zone
I see there
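A sketch of the usual checklist on the secondary zone when metadata is not
syncing; the primary endpoint and system-user keys are placeholders, and the
metadata sync run step is the one mentioned elsewhere in this thread:
$ radosgw-admin period pull --url=https://primary.example.com:443 --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
$ radosgw-admin metadata sync init
$ radosgw-admin metadata sync run
$ radosgw-admin sync status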