Hello,
I have a problem with my Ceph cluster (3x mon nodes, 6x OSD nodes; every
OSD node has 12 rotational disks and one NVMe device for the
BlueStore DB). Ceph is installed by the ceph orchestrator and the OSDs
use BlueFS storage.
I started the upgrade process from version 17.2.6 to 18.2.1 by
invoking:
ceph or
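(The command above is cut off; presumably it was the usual cephadm upgrade
invocation, roughly along these lines - the exact form is an assumption, not
taken from the message:)

  # assumed invocation; the original command is truncated above
  ceph orch upgrade start --ceph-version 18.2.1
  # or, pinning the exact container image instead of a version:
  ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.1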
Hi,
When trying to log in to RGW via the dashboard, an error appears in the
logs: ValueError: invalid literal for int() with base 10: '443
ssl_certificate=config://rgw/cert/rgw.test'
This is RGW with SSL.
If RGW is without SSL, everything works fine.
ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde9
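(The traceback suggests the dashboard tried to parse a port number out of an
rgw_frontends value in which the port and the ssl_certificate option ended up
in a single token. For reference, an SSL-enabled beast frontend is typically
configured roughly like this; the service name client.rgw.test and the cert
location are only illustrative assumptions:)

  # illustrative example of an SSL-enabled RGW frontend; names are assumed
  ceph config set client.rgw.test rgw_frontends "beast ssl_port=443 ssl_certificate=config://rgw/cert/rgw.test"
  # inspect the value the daemon is actually using
  ceph config get client.rgw.test rgw_frontends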
Hi,
I just upgraded from 17.2.6 to 18.2.1 and have some issues with the MDS.
The MDS started crashing with:
2023-12-27T13:21:30.491+0100 7f717b5886c0 1 mds.f9sn015 Updating MDS
map to version 2689280 from mon.5
2023-12-27T13:21:30.491+0100 7f717b5886c0 1 mds.0.2689276
handle_mds_map i am now mds.0.2
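(Not from the original report, but the usual first diagnostics when an MDS
misbehaves after an upgrade look something like this:)

  # confirm which versions the daemons are running and the filesystem/MDS state
  ceph versions
  ceph fs status
  ceph status
  # list any crash reports the cluster has recorded
  ceph crash ls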
Good morning everybody!
Guys, are there any differences or limitations when using Docker instead of
Podman?
Context: I have a cluster on Debian 11 running Podman (3.0.1), but when the
iSCSI service is restarted, the "tcmu-runner" binary ends up in "Z state" and
the "rbd-target-api" script enters "D St
Hi Jan,
IIUC the attached log is for ceph-kvstore-tool, right?
Can you please share the full OSD startup log as well?
Thanks,
Igor
On 12/27/2023 4:30 PM, Jan Marek wrote:
> Hello,
> I have a problem with my Ceph cluster (3x mon nodes, 6x OSD nodes; every
> OSD node has 12 rotational disks and one NVMe devic
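(On a cephadm-managed cluster the full OSD startup log can usually be pulled
with something like the following; the OSD id and fsid are placeholders:)

  # via cephadm, on the host running the OSD (osd.0 is a placeholder)
  cephadm logs --name osd.0
  # or straight from journald (fsid is a placeholder)
  journalctl -u ceph-<fsid>@osd.0.service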
We are running rook-ceph deployed as an operator in Kubernetes, with Rook
version 1.10.8 and Ceph 17.2.5.
It is working fine, but we are seeing frequent OSD daemon crashes every 3-4
days (the daemons restart without any problem), and we are also seeing
flapping OSDs, i.e. OSDs going up and down.
Recently a daemon crash happened for 2
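(A rough sketch of the usual first steps for chasing such crashes; the
namespace and deployment name below assume a default rook-ceph install:)

  # crash reports recorded by the cluster
  ceph crash ls
  ceph crash info <crash-id>
  # previous (crashed) container log of one OSD pod; names are assumptions
  kubectl -n rook-ceph logs deploy/rook-ceph-osd-0 --previous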
Solved: the update to 18.2.1 worked, but not with ceph orch upgrade start
--ceph-version 18.2.1 (upgrade: failed to pull target image).
It worked with ceph orch upgrade start --image quay.io/ceph/ceph:v18.2.1
instead.
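Either way, progress can be followed and the end state verified with:

  # watch the upgrade and confirm all daemons ended up on the target version
  ceph orch upgrade status
  ceph versions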
Wed, 27 Dec 2023 at 17:02, Владимир Клеусов :
> Hi,
>
> When trying to log in to RGW via the dashboa
Actually, there is a problem with this tarball:
https://github.com/ceph/ceph/archive/refs/tags/v18.2.1.tar.gz
It corresponds to an older commit,
e3fce6809130d78ac0058fc87e537ecd926cd213, which misses some important fixes.
Maybe it should be fixed there.
The src.rpms use 7fe91d5d5842e04be3b4f51
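(The commit a release tag actually points at can be cross-checked against the
repository without downloading the tarball, e.g.:)

  # list the tag and its peeled commit; the line ending in ^{} is the commit the tag points at
  git ls-remote --tags https://github.com/ceph/ceph.git | grep 'v18.2.1'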
Hi All,
A follow-up: So, I've got all the Ceph Nodes running Reef v18.2.1 on
RL9.3, and everything is working - YAH!
Except...
The Ceph Dashboard shows 0 of 3 iSCSI Gateways working, and when I click
on that panel it returns a "Page not Found" message - so I *assume*
those are the three "or
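(To cross-check, the gateways the dashboard knows about and the iSCSI services
the orchestrator actually has deployed can be listed roughly like this:)

  # iSCSI gateways registered with the dashboard module
  ceph dashboard iscsi-gateway-list
  # iSCSI services and daemons as seen by the orchestrator
  ceph orch ls iscsi
  ceph orch ps | grep iscsi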