This mgr assert failure is fixed at https://github.com/ceph/ceph/pull/46688
You can upgrade to 16.2.13 to get the fix.
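If the cluster is managed by cephadm, starting the upgrade could look roughly like this (a minimal sketch; adjust for your deployment method):
# ceph orch upgrade start --ceph-version 16.2.13
# ceph orch upgrade status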
Eugen Block wrote on Thu, Aug 3, 2023 at 14:57:
> Can you query those config options yourself?
>
> storage01:~ # ceph config get mgr mgr/dashboard/standby_behaviour
> storage01:~ # ceph conf
On 03/08/2023 at 00:30, Yuri Weinstein wrote:
> 1. bookworm distro build support
> We will not build bookworm until Debian bug
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1030129 is resolved
FYI, there's also a bug in Debian's GCC 12, which is used by default
in Debian Bookworm, that cau
Check the ownership of the newly created DB device; according to
your output it belongs to the root user. In the osd.log you should
probably see something related to "permission denied". If you change it
to ceph:ceph, the OSD might start properly.
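As a rough sketch (the OSD id and paths below are placeholders, assuming the DB sits on an LVM logical volume), checking and fixing the ownership could look like:
# ls -l /var/lib/ceph/osd/ceph-<id>/block.db
# chown -h ceph:ceph /var/lib/ceph/osd/ceph-<id>/block.db
# chown ceph:ceph /dev/mapper/<db_vg>-<db_lv>
# systemctl restart ceph-osd@<id>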
Quoting Roland Giesler:
Ouch, I got
Hi all,
I have a ceph cluster with 3 nodes, ceph version 16.2.9. There are 7
SSD OSDs on each server and one pool that resides on these OSDs.
My OSDs are terribly unbalanced:
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE N
Turn off the autoscaler and increase pg_num to 512 or so (power of 2).
The recommendation is to have between 100 and 150 PGs per OSD (incl.
replicas). And then let the balancer handle the rest. What is the
current balancer status (ceph balancer status)?
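For example (the pool name is a placeholder), something along these lines:
# ceph osd pool set <pool> pg_autoscale_mode off
# ceph osd pool set <pool> pg_num 512
# ceph balancer status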
Quoting Spiros Papageorgiou:
Hi
We went through this exercise, though our starting point was ubuntu 16.04 /
nautilus. We reduced our double builds as follows:
1. Rebuild each monitor host on 18.04/bionic and rejoin still on nautilus
2. Upgrade all mons, mgrs (and optionally rgws) to pacific
3. Convert each mon, mgr
On 03-Aug-23 12:11 PM, Eugen Block wrote:
ceph balancer status
I changed the PGs and it started rebalancing (and I turned the autoscaler
off), so now it will not report a status.
It reports: "optimize_result": "Too many objects (0.088184 > 0.05)
are misplaced; try again later"
Let's wait a fe
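If waiting is not an option, the threshold can presumably be raised via the mgr option target_max_misplaced_ratio (0.05 is its default), e.g.:
# ceph config set mgr target_max_misplaced_ratio 0.07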
Hi,
Can you show `smartctl -a` for this device?
Does this drive show input/output errors in dmesg when you try to run ceph-osd?
k
Sent from my iPhone
> On 2 Aug 2023, at 21:44, Greg O'Neill wrote:
>
> Syslog says the drive is not in write-protect mode, however smart says life
> remaining is at 1
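For reference, gathering that information could look something like this (the device path is a placeholder):
# smartctl -a /dev/sdX
# dmesg -T | grep -i -e sdX -e 'I/O error'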
I am in the process of expanding our cluster capacity by ~50% and have
noticed some unexpected behavior during the backfill and recovery process
that I'd like to understand and see if there is a better configuration that
will yield a faster and smoother backfill.
Pool Information:
OSDs: 243 spinn
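For context, the knobs usually discussed for backfill speed are roughly the following (the values are only illustrative, and on recent releases the mclock scheduler may override them):
# ceph config set osd osd_max_backfills 2
# ceph config set osd osd_recovery_max_active 3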
I’m attempting to set up the CephFS CSI on K3s managed by Rancher against an
external CephFS using the Helm chart. I’m using all default values on the Helm
chart except for cephConf and secret. I’ve verified that the configmap
ceph-config gets created with the values from Helm and I’ve verified
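A sketch of how the chart install and the configmap check might look (release name, namespace and repo URL are assumptions, not taken from the original report):
# helm repo add ceph-csi https://ceph.github.io/csi-charts
# helm install ceph-csi-cephfs ceph-csi/ceph-csi-cephfs -n ceph-csi -f values.yaml
# kubectl -n ceph-csi get configmap ceph-config -o yaml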
Hi
Could you please provide guidance on how to diagnose this issue:
In this case, there are two Ceph clusters: cluster A, 4 nodes, and cluster B, 3
nodes, in different locations. Both are already running RGW multi-site; A is
the master.
Cephfs snapshot mirroring is being configured on the cluste
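A few commands that might help with the diagnosis (a sketch, assuming the mirroring module is enabled and the filesystem is named cephfs):
# ceph mgr module ls | grep mirroring
# ceph fs snapshot mirror daemon status
# ceph fs snapshot mirror peer_list cephfs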
Take a look at https://github.com/TheJJ/ceph-balancer
We switched to it after a lot of attempts to make the internal balancer work
as expected, and now we have ~even OSD utilization across the cluster:
# ./placementoptimizer.py -v balance --ensure-optimal-moves
--ensure-variance-decrease
[2023-08-03 23
I tried using the peer_add command and it is hanging as well:
root@fl31ca104ja0201:/# ceph fs snapshot mirror peer_add cephfs
client.mirror_remote@cr_ceph cephfs
v2:172.18.55.71:3300,v1:172.18.55.71:6789],[v2:172.18.55.72:3300,v1:172.18.55.72:6789],[v2:172.18.55.73:3300,v1:172.18.55.73:6789
AQCfwMl
I've been digging and I can't see that this has come up anywhere.
I'm trying to update a client from Quincy 17.2.3-2 to 17.2.6-4 and I'm getting
the error
Error:
Problem: cannot install the best update candidate for package
ceph-base-2:17.2.3-2.el9s.x86_64
- nothing provides liburing.so.2(
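To check which package in the enabled repos provides that library, something like this might help:
# dnf provides 'liburing.so.2*'
# dnf install liburing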
Adding additional info:
Cluster A and cluster B both have the same name, ceph, and each has a single
filesystem with the same name, cephfs. Is that the issue?
I tried using the peer_add command and it is hanging as well:
root@fl31ca104ja0201:/# ls /etc/ceph/
cr_ceph.conf client.mirror_remote.keying cep
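For reference, the token-based bootstrap flow is roughly as follows (the filesystem, entity and site names are placeholders; if I recall correctly the token is created on the secondary and imported on the primary):
On cluster B (secondary):
# ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote site-b
On cluster A (primary):
# ceph fs snapshot mirror peer_bootstrap import cephfs <token>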
Attached log file
-Original Message-
From: Adiga, Anantha
Sent: Thursday, August 3, 2023 5:50 PM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: cephfs snapshot mirror peer_bootstrap import hung
Adding additional info:
The cluster A and B both have the same name: ceph and each has a
Hi,
There is a snap ID for each snapshot. How is this ID allocated, sequentially?
I did some tests, and it seems this ID is per pool, starting from 4 and always
going up.
Is that correct?
What's the max of this ID?
What's going to happen when the ID reaches the max? Will it go back and start
from 4 again?
Tha
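For what it's worth, the allocated IDs can be inspected with something like (pool and image names are placeholders):
# rbd snap ls <pool>/<image>
# ceph osd pool ls detail
As far as I know the snap ID is a 64-bit counter, so wrapping around is not a practical concern.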
Hi,
We know a snapshot is a point in time. Is this point in time tracked internally
by some sort of sequence number, or by the timestamp shown by "snap ls", or
something else?
I noticed that with "deep cp", the timestamps of all snapshots are changed to
the copy time.
Say I create a snapshot at 1PM
Hi Anantha,
On Fri, Aug 4, 2023 at 2:27 AM Adiga, Anantha wrote:
>
> Hi
>
> Could you please provide guidance on how to diagnose this issue:
>
> In this case, there are two Ceph clusters: cluster A, 4 nodes and cluster B,
> 3 node, in different locations. Both are already running RGW multi-si
Hi Nathan,
On Mon, Jul 31, 2023 at 4:34 PM Nathan Harper wrote:
>
> Hi,
>
> We're having sporadic problems with a CephFS filesystem where MDSs end up
> on the OSD blocklist. We're still digging around looking for a cause
> (Ceph related or other infrastructure cause).
The monitors can blocklis
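Checking and clearing blocklist entries can be done with something like:
# ceph osd blocklist ls
# ceph osd blocklist rm <addr:port/nonce>
(On releases before Pacific the subcommand is 'blacklist' rather than 'blocklist'.)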
Hi,
In most cases the 'alternative' distros like Alma or Rocky have outdated
versions of packages compared with CentOS Stream 8 or CentOS Stream 9. An
example is the golang package: on c8s it is version 1.20, while on Alma it is
still 1.19.
You can try to use c8s/c9s or try to contribute to your distr