From "Jan Pekař - Imatic"
To m...@tuxis.nl; ceph-users@ceph.io
Date 2/25/2023 4:14:54 PM
Subject Re: [ceph-users] OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck
Hi,
I tried upgrading to Pacific now. The same result - the OSD is not starting, stuck at 1500 keys.
JP
On 23/02/2023 00.16, Jan Pekař - Imatic wrote:
Hi,
I enabled debug and got the same result - 1500 keys is where it ends. I also enabled
debug_filestore and ...
2023-02-23T00:02:34.876+0100 7f8ef26d1700
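For anyone trying to reproduce this, the extra logging can be turned on roughly like this (just a sketch; osd.0 and the levels of 20 are examples, adjust for your daemon):

    # in ceph.conf on the OSD host, then restart the converting OSD
    [osd]
        debug osd = 20
        debug filestore = 20

    # or at runtime through the admin socket while the daemon is up
    ceph daemon osd.0 config set debug_osd 20
    ceph daemon osd.0 config set debug_filestore 20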
-- Original Message --
From "Jan Pekař - Imatic"
To ceph-users@ceph.io
Date 1/12/2023 5:53:02 PM
Subject [ceph-users] OSD upgrade problem nautilus->octopus - snap_mapper upgrade stuck
Hi all,
I have a problem upgrading nautilus to octopus on my OSD.
The mon and mgr upgrade was OK, but the first OSD got stuck on
2023-01-12T09:25:54.122+0100 7f49ff3eae00 1 osd.0 126556 init upgrade snap_mapper (first start as octopus)
and there was no activity after that for more than 48 hours. No disk a
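One way to see whether the conversion is doing anything at all is to run the stuck OSD in the foreground with verbose logging (a sketch; the id 0 and debug levels are placeholders):

    systemctl stop ceph-osd@0
    ceph-osd -f -i 0 --debug_osd 20 --debug_filestore 20 2>&1 | tee /tmp/osd.0-upgrade.log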
Hi all,
we have a problem on our production cluster running nautilus (14.2.22).
The cluster is almost full, and a few months ago we noticed issues with slow peering - when we restart any OSD (or host), it takes hours to finish
the peering process instead of minutes.
We noticed that some pool contains 90k
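For reference, the peering state can be watched during an OSD restart with something like this (a sketch; pool and PG ids will of course differ):

    ceph -s
    ceph pg ls peering          # PGs currently stuck in peering
    ceph osd pool ls detail     # per-pool details, incl. removed_snaps intervals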
Hi all,
I would like to "pair" a MonSession with its TCP connection to get the real process which is using that session. I need it to identify processes with
old ceph features.
A MonSession looks like this:
MonSession(client.84324148 [..IP...]:0/3096235764 is open allow *, features
0x27018fb86aa42ada (jewel)
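A rough way to narrow it down (just a sketch; the mon name and the default msgr ports 6789/3300 may differ on your cluster):

    # on the monitor host: dump MonSessions (client IP, caps, negotiated features)
    ceph daemon mon.$(hostname -s) sessions

    # on the client host from the session entry: which local processes hold
    # TCP connections to the monitors
    ss -tnp | grep -E ':(6789|3300)'

That only tells which processes on that host talk to the mons, though, not which MonSession belongs to which socket.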
And yet another potential issue (or at least an additional factor) with your setup is a pretty high DB vs. main device ratio (1:11).
Deletion procedures running on multiple OSDs result in a pretty high load on the DB volume, which becomes overburdened...
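One quick way to confirm that is to look at the BlueFS counters on a couple of the affected OSDs (a sketch; osd.0 is just an example id):

    ceph daemon osd.0 perf dump bluefs
    # db_used_bytes vs. db_total_bytes shows how full the DB volume is;
    # a non-zero slow_used_bytes means RocksDB has already spilled over to the main device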
Thanks,
Igor
On 9/11/2020 3
Hi all,
I have built a testing cluster with 4 hosts, 1 SSD and 11 HDDs on each host.
Running ceph version 14.2.10 (b340acf629a010a74d90da5782a2c5fe0b54ac20)
nautilus (stable) on Ubuntu.
Because we want to store small objects, I set bluestore_min_alloc_size to 8192
(it is maybe important in thi
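For completeness, the setting looks like this in ceph.conf (a sketch; note it is applied at OSD creation time, so it only affects OSDs deployed after the change):

    [osd]
    bluestore min alloc size = 8192
    # or the device-specific variants:
    # bluestore min alloc size hdd = 8192
    # bluestore min alloc size ssd = 8192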
Hi Ben,
we are not using an EC pool on that cluster.
The OSD out behavior almost stopped when we solved the memory issues (less memory
allocated to the OSDs).
We are not working on that cluster anymore, so we have no other info about
that problem.
Jan
On 20/07/2020 07.59, Benoît Knecht wrote:
Hi Jan,
2020, at 5:46 AM, Jan Pekař - Imatic wrote:
Each node has 64GB RAM so it should be enough (12 OSD's = 48GB used).
On 21/03/2020 13.14, XuYun wrote:
Bluestore requires more than 4G memory per OSD, do you have enough memory?
On March 21, 2020, at 8:09 PM, Jan Pekař - Imatic wrote:
Hello,
I
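The ~4G figure mentioned above corresponds to the default osd_memory_target (4 GiB in nautilus); if RAM is tight it can be lowered, e.g. (a sketch, value in bytes):

    ceph config set osd osd_memory_target 3221225472   # ~3 GiB per OSD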
Hello,
I have a ceph cluster running version 14.2.7 (3d58626ebeec02d8385a4cefb92c6cbc3a45bfe8)
nautilus (stable),
4 nodes - each node with 11 HDDs, 1 SSD, 10Gbit network.
The cluster was empty, a fresh install. We filled the cluster with data (small blocks)
using RGW.
The cluster is now used for testing, so no client was us
8390 mlcod 60042'3048390 active+clean] log_op_stats osd_op(osd.9.60026:836317 2.c 2:311c8802:::rbd_data.09df7e2ae8944a.:7b5 [copy-get max 8388608] snapc 0=[] ondisk+read+rwordered+ignore_cache+ignore_overlay+map_snap_clone+known_if_redirected e60042) v8 inb 0 outb 4194423 lat 19.62
Hi all,
I have a small cluster, and yesterday I tried to mount an older RBD snapshot to recover data. (I have approx. 230 daily snapshots of one RBD image
on my small ceph.)
After I did the mount and ls operations, the cluster was stuck, and I noticed that 2 of my OSDs ate CPU and grew in memory usage (more
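For reference, mounting an old snapshot read-only typically looks like this (a sketch; pool, image and snapshot names are placeholders, and -o noload assumes an ext4 filesystem inside the image):

    rbd map mypool/myimage@daily-2019-06-01 --read-only
    mount -o ro,noload /dev/rbd0 /mnt/restore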